A short update about yesterday's problem with the migration of the instance. We had tested everything beforehand, but not an import with the real data.
It turned out that the old image alone took up 50% of the new disk. Not a big problem, I thought, because the qcow2 migration and import process should strip out the empty parts of the image. In the end the new image should have been around 350GB instead of 470GB. That would have fit on the new disk perfectly alongside the old image.
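For context on why the import can shrink an image at all: qcow2 only allocates clusters that actually contain data, and `qemu-img convert` skips regions that read as all zeroes. A minimal sketch of the same effect with a plain sparse file (file names here are made up for illustration):

```shell
# Create a file with a 1 GiB apparent size but almost no allocated blocks.
truncate -s 1G sparse.img
ls -l sparse.img   # apparent size: 1073741824 bytes
du -k sparse.img   # actual disk usage: close to 0
# qemu-img convert rewrites an image and drops all-zero clusters, e.g.:
#   qemu-img convert -O qcow2 old-image.qcow2 new-image.qcow2
rm sparse.img
```

Whether the converted image actually ends up smaller depends on how much of the guest disk still reads as zeroes; deleted files whose blocks were never discarded or zeroed stay allocated, which is presumably what bit us here.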
There will be an extended downtime (approx. 2-3 hours) of our instance chaos.social this evening, starting at 19:00 CET (UTC+1). We need to move the system to a new host with more storage. #mastoadmin
JBOD, the other option, they recommend for a much more specific use case: "Using Just a Bunch of Drives (JBOD) in independent drive mode with Ceph is supported when using all Solid State Drives (SSDs), or for configurations with high numbers of drives per controller, for example, 60 drives attached to one controller."
What fun! :)
Now, the headline for this topic in the docs is "avoid RAID" and, as usual, isn't a good match for the recommendation they actually give in the paragraph:
"Red Hat recommends that each hard drive be exported separately from the RAID controller as a single volume with write-back caching enabled."
So they do in fact recommend RAID, but only as a single-disk RAID 0 volume per drive, with a battery-backed controller.
After that I searched for an answer to the, apparently religious, question of whether single-disk RAID 0 or the controller's JBOD mode is recommended underneath a Ceph cluster's OSDs.
After finding no clear result I decided to go by the official Red Hat docs, as they seem to be the best source of truth for Ceph and are a lot better than the Ceph project's official docs. Well... maybe a result of Red Hat buying Ceph a while ago.
Today, while building a new Grafana dashboard for Ceph, I found that the OSDs on one host perform much better than those on the others. After this observation and a round of debugging I found out that the controllers were configured differently.
The two hosts with the much slower apply and commit latency used the JBOD mode of the controller; the faster one used single-disk RAID 0 (with a write-back cache) to provide the disks to Ceph.
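A quick way to spot such outliers on the CLI is `ceph osd perf`, which lists per-OSD commit and apply latency. A small sketch filtering output of that shape with awk (the sample numbers below are made up, not measurements from the real cluster):

```shell
# Fake sample in the shape of `ceph osd perf` output
# (real usage would be: ceph osd perf | awk ...).
cat > osd_perf.txt <<'EOF'
osd commit_latency(ms) apply_latency(ms)
  0                  2                 2
  1                 41                45
  2                 39                44
  3                  3                 2
EOF
# Flag every OSD whose apply latency is above 10 ms.
awk 'NR > 1 && $3 > 10 { print "slow OSD " $1 ": apply " $3 " ms" }' osd_perf.txt
rm osd_perf.txt
```

With the sample data this flags OSDs 1 and 2; grouping the flagged OSDs by host is what pointed at the controller configuration in our case.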
There will be a longer downtime (2 to 3 hours) of chaos.social in the next few days while we migrate the instance to a new system with more storage. @ordnung will announce it in a toot some time beforehand.
The #radicale project (a self-hosted CardDAV/CalDAV solution) is currently looking for some new core maintainers.
Please re-toot and share it in your community if you want to help.
A €6000 subsidy per electric car if it costs >=€40,000 - instead of doing something for the environment (and e.g. handing out a 25-year BahnCard 50 to anyone who doesn't buy a car), we funnel money to the car industry via people who can already afford overpriced cars anyway.
The opposite of a means test included - if you have no money, you get none.
When the "bonus" expires, the cars will suddenly be €6000 cheaper, want to bet?
If your big project has complicated contribution rules and guidelines which force me to spend a multiple of the time I needed to fix your documentation bug (or anything else), then I will just not contribute. 🙄 Make it easy to contribute small things in the first place, especially to the documentation. This will help your project even if it adds a little overhead to your planning.