A scrub had completed without errors:

    scan: scrub repaired 0 in 4h6m with 0 errors on Sat Jun 8 00:42:59 2013

Even further examination shows me that these objects are still in the ZFS delete queue object (seemingly always object #3):

    Object  lvl  iblk  dblk  dsize  lsize  %full  type
    dnode flags: USED_BYTES USERUSED_ACCOUNTED

Obviously, un-purged directory objects aren't going to be wasting a lot of space. My guess is that the same problem may exist for regular files that have extended attributes. I just did a bit more digging and I find b00131d to be possibly related to, at least, the leakage that I've discovered.

    # zpool create junk /dev/disk/by-partlabel/junk
    # setfattr -n user.blah -v 'Hello world' /junk/a/junk
    rm: remove regular file `/junk/a/junk'? y
    Dataset junk/a, ID 40, cr_txg 7, 19.7M, 9 objects

Your leakage problem may very well be different. It would be interesting for you to examine the output of zdb -dddd gfs and see what type of objects are left lying around when you have some leakage. They'll be pretty obvious because their path will show up as "?." as you can see above.

I don't know if this is related, but there may be a leak that has been present for years. I can always reproduce it (with the latest spl/zfs) by creating 100k files (with a total of 100 GB) in a zpool root dir and then deleting them all. No xattrs are needed, and an export/import does not clean it up. Each time after the delete, the amount of used space increases by around 200-400 KB; after some runs, it uses about 2 MB more than before. If there is no mechanism to clean this up, a zpool may fill up pretty fast, and if it is by design, it shouldn't fill up the pool.

    Dataset mos, ID 0, cr_txg 4, 2.87M, 304 objects
    Dataset stor2, ID 21, cr_txg 1, 3.94M, 10 objects

zdb -d stor2 after create/delete of 100 GB / 100k files:

    Dataset mos, ID 0, cr_txg 4, 2.90M, 305 objects
    Dataset stor2, ID 21, cr_txg 1, 4.12M, 10 objects
    Dataset mos, ID 0, cr_txg 4, 2.90M, 306 objects
    Dataset stor2, ID 21, cr_txg 1, 4.30M, 10 objects
    Dataset mos, ID 0, cr_txg 4, 2.87M, 306 objects
    Dataset mos, ID 0, cr_txg 4, 3.13M, 306 objects
    Dataset stor2, ID 21, cr_txg 1, 4.42M, 10 objects
    Dataset mos, ID 0, cr_txg 4, 2.95M, 306 objects
    Dataset stor2, ID 21, cr_txg 1, 4.52M, 10 objects

That shows your problem isn't the xattr-related issue I've discovered.

Those show every sign of being deleted while open, but I know you have tried killing all your processes. If they're really not open, it seems there must be some weird sequence of system calls being performed on them that causes them to be left in this condition. Maybe it is related to the xattr issue I've discovered, but this seems like something different. I don't think I can come up with a reproducer on my own; you'll need to find the sequence and timing of the system calls used by your program to help reproduce this. I'm also wondering whether your whole file system becomes "stuck" at this point, or whether newly created files created outside your logging application are deleted properly.

This is a lot easier to test in a non-root file system. You might try to zfs create gfs/test and then create a sample file there. Run an ls -i on it to find its object id, which you can view with zdb -dddd gfs/test. If you do something like echo blah > blah.txt and then remove it, is its space freed? When testing, remember to run a sync between deleting the file and checking its object with zdb.
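Spelled out, that test might look something like the sketch below. The dataset name gfs/test and the mountpoint /gfs/test are taken from the suggestion above; everything else is an assumption, so adjust it to your setup.

    # Use a scratch dataset so the test stays out of the root file system.
    zfs create gfs/test
    cd /gfs/test

    # Create a small sample file and note its object (inode) number.
    echo blah > blah.txt
    ls -i blah.txt

    # Delete it, then sync so the pending transaction group is committed
    # before inspecting the dataset with zdb.
    rm blah.txt
    sync

    # Dump the dataset's objects; the object number noted above should be gone.
    # Leaked objects stand out by their path field, as described above.
    zdb -dddd gfs/test

    # Optionally look at the delete queue (reported above as object #3) to see
    # whether the deleted file is stuck in it.
    zdb -dddd gfs/test 3

If space is freed for a file created this way but not for the log files, that points back at the particular sequence and timing of system calls the logging application performs.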
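The bulk create/delete reproduction described earlier (100k files totalling about 100 GB, then a full delete) could be scripted roughly as follows. The pool name stor2 comes from the report above; the subdirectory and the 1 MiB file size are assumptions made here for convenience, since the original report created the files directly in the pool's root dir.

    # Record the baseline: zdb -d prints per-dataset used space and object counts.
    zdb -d stor2

    # Create 100,000 files of 1 MiB each (about 100 GB in total).
    mkdir -p /stor2/leaktest
    for i in $(seq 1 100000); do
        dd if=/dev/zero of=/stor2/leaktest/f$i bs=1M count=1 2>/dev/null
    done

    # Delete everything again and commit the pending transactions.
    rm -rf /stor2/leaktest
    sync

    # Compare with the baseline: used space and object counts should return to
    # their previous values if nothing is leaked.
    zdb -d stor2

Repeating the run and diffing the zdb -d output makes the 200-400 KB growth per cycle reported above easy to spot.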