#1
Empty recycle bin and bad performance
Hi Kinook:
I have 1409 items in my Recycle Bin. I initiated the "Empty Recycle Bin" command. After 40 minutes with the "Please Wait" dialogue displayed, I tried cancelling the operation. The appropriate message appeared, but it did not go away even after another 40 minutes. Obviously I could not use UR during that time. UR was running on a dual-core machine with 2GB RAM. The database is 353Gb.

I had to kill UR and restart. The program froze and I had to kill it again. I noticed a file with the database name and a urd-journal extension and deleted it. UR fired right up and loaded my database, with the 1409 items still in the Recycle Bin. Emptying the Recycle Bin failed again, and I went through the same steps to restart the program. By the way, ALL OF MY TOOLBAR CUSTOMIZATIONS WERE LOST!

I tried a database repair and got the error "database disk image is malformed." So I created a new database and copied the items from the "bad" one to the empty one. Guess what? I initiated this process 4 hours and 50 minutes ago and am still waiting. I also cannot leave the computer unattended, because the "Switch To" dialogue pops up from time to time. I click the "Switch To" button and the dialogue goes away; I assume the copying process then resumes until the next box appears.

Can you PLEASE fix these performance issues I am experiencing? Or am I in such a minority that it is not worth addressing? (I will understand if this is the case and will have to use some other program.) Thank you.

Jon

Last edited by Jon Polish; 06-19-2007 at 09:51 AM.
#2
353Gb ~ 40GB database???
Now I understand your speed concerns, given another post by Jon Polish: "... By the way, PLEASE do not implement any further changes or additions to the program without first addressing its speed (and yes, I am now running it on a dual core, but as I have described my needs elsewhere, I find UR's speed positively maddening) ..."
#3
Hi Quant:
As I understand it, the size of a UR database should not significantly affect its performance. That may be the theory, but practice may be different. I use two other information managers which do not have anything near these performance issues, despite having even larger databases.

Jon
#4
I'm sure there must be significant performance issues once you go beyond your RAM size. Probably not 40 minutes per operation, but ... maybe I'm wrong; let's hear from Kinook.

I read several days ago that SQLite performance is quite dependent on the size of the tables it's operating on, but that was at table sizes of about 500,000 rows. If you were deleting only 1500 items, it shouldn't be a problem. The thing is that UR has to do a lot of cleaning, and reading/writing to the HDD ... hmmm ... I'm curious. Good that someone put UR to a real test.
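To put the claim above in perspective, here is a minimal benchmark sketch using Python's sqlite3 module. The table and column names are invented for illustration (they are not Ultra Recall's actual schema); the point is only that deleting ~1500 rows in a single SQLite transaction completes in well under a minute on any modern machine, so the slowness reported here must come from extra work around the raw delete, not the delete itself.

```python
import sqlite3
import time

# Illustrative schema, not Ultra Recall's real one.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, data BLOB)")

# Insert 1500 rows of 1 KB each, roughly matching the item count in post #1.
con.executemany(
    "INSERT INTO items (data) VALUES (?)",
    [(b"x" * 1024,) for _ in range(1500)],
)
con.commit()

start = time.time()
con.execute("DELETE FROM items")  # runs inside one transaction
con.commit()
elapsed = time.time() - start

remaining = con.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(remaining, elapsed)
```

On an in-memory database this is effectively instantaneous; even on disk, a plain delete of this size is measured in milliseconds, not minutes.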
#5
Here is my report on the repair of the database (the procedures I took are described above).
The total time to copy the items from the old, damaged, and irreparable database (at least by UR's repair utility) was 8 hours 58 minutes. I understand that keywording, etc. takes place in the background, but this is ridiculous.

Adding insult to injury were two additional problems. First, UR could not be used during this period. Second, a very frequent but erratic pop-up ("Switch To" or "Retry") needed constant attention, lest the copying process not proceed further. I could dismiss it by clicking either "Switch To" or "Retry".

I like what UR is supposed to be able to do, but I have just about had enough. The reality is that when pushed beyond what I think most users use it for, but still very far below its stated limitations, UR simply falls flat on its face.

Jon
#6
I'd like to see a screenshot of "Database Properties" (File -> Properties).

You say your DB is 40GB, but what are the stored document size, item rich text size, etc.? My db is about 50 MB, but its item rich text size is about 5 times more. If I scaled that to your example, it would make about 200 GB of data (I don't know how much of that is the index and keywords themselves). Now, just copying that would take about 1-2 hours depending on HDD speed, plus indexing (SearchInform, at high speed, manages about 15-30 GB/hour). It's really hard to say what is actually copied and what is indexed, but 9 hours doesn't seem that unreasonable after all ...
#7
Quant is right (as usual). From the SQLite documentation:

"...when database begin to grow into the multi-gigabyte range, the size of the bitmap can get quite large. If you need to store and modify more than a few dozen GB of data, you should consider using a different database engine."
#8
Quant, I will try to send you a screen cap tomorrow. The database is at work.
Bill, elsewhere in the forums (http://www.kinook.com/Forum/showthre...=&threadid=709), Kinook states this:

"What is the maximum size of an Ultra Recall database? The physical size limit is 2 terabytes (1 terabyte = approximately 1,000 gigabytes, http://en.wikipedia.org/wiki/Terabyte). You may also be limited by free disk space and the maximum file size supported by your file system (2GB for FAT, 4GB for FAT32, > 2TB for NTFS). Another consideration is that some file formats (for instance, some CD-R formats) can't handle files larger than 2GB, so how you backup your data may be a factor in how large of database you wish to maintain. We've successfully tested databases upwards of 5GB and regularly use databases larger than 300MB."

Based on this information and my successful trial of UR 2.x (albeit with a much smaller database), I purchased 2.0 and eventually upgraded to 3.x. My database is far from Kinook's stated limits, so perhaps they need to change this claim to be more in line with SQLite's documentation.

Jon
#9
As far as 9 hours seeming reasonable, I respectfully feel otherwise. First of all, two of my other information managers (one of which is indexed) manage similar tasks in no longer than 30 minutes. A relational database program I use can accomplish identical data manipulation in less than 10 minutes. While doing their tasks, all three do not significantly impact the performance of other programs as UR does. Perhaps this is reasonable when using SQLite, but it is a problem for me (and again, maybe I am alone with this complaint, and I accept that it will not be resolved for the sake of one user).

Making matters worse was the frequent appearance of the "Switch To" box during this operation. Without continued user intervention, the copying process would not have been able to complete.

Besides, as I indicated above, UR is supposedly able to deal with databases far larger than mine. I shudder to think how long I would have to wait for a database of that size. My 211th birthday?

I like the thoughtful design and intent that has gone into UR, and I don't want to give the impression that I am down on the program. It may not be for me now, but if the problems I experience were ever addressed - WOW! The screen cap I promised is attached.

Jon

Last edited by Jon Polish; 06-20-2007 at 09:16 AM.
#10
The main reason that permanent deletion can take an extended time period (as can some other "data manipulation" functions) is the full Undo/Redo capability of Ultra Recall. When you empty the Recycle Bin of 1500 items storing 40GB of data, all of the data being deleted is "moved" into the Undo "temporary tables" so that Undo/Redo remains available.

Perhaps emptying the Recycle Bin should not be undoable (this would likely speed up the process significantly).

You mention the Empty Recycle Bin and Compact & Repair functions as being slow; I would ask about the performance of normal usage functions (adding, navigating, viewing, searching, etc.). Ultra Recall has been significantly optimized for these more routine functions, and I imagine they are at least as efficient as in competing applications.

Regarding the published "limits" of Ultra Recall: we really didn't envision users trying to store hundreds of gigabytes of information in a Personal Information Manager application, but as I mentioned before, I would expect that normal functions (view, search, navigate, etc.) would continue to have reasonable performance even when storing these enormous amounts of data...

Finally, unless it is imperative that the file actually shrink in size (i.e., you are not going to add more data to the file), compacting really is not necessary; the unused space will be reclaimed by new items anyway.
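The undo pattern Kinook describes can be sketched in a few lines of SQLite. This is an illustration of the general technique (copy rows into an undo table before deleting them), not Ultra Recall's actual implementation; the table names, columns, and helper functions are all hypothetical.

```python
import sqlite3

# Hypothetical schema: a 'deleted' flag marks items in the Recycle Bin.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT, deleted INTEGER);
    CREATE TEMP TABLE undo_items AS SELECT * FROM items WHERE 0;
    INSERT INTO items VALUES (1, 'note A', 1), (2, 'note B', 0);
""")

def empty_recycle_bin(con):
    # "Move" deleted rows into the undo table, then remove them from items.
    # This double write is the extra cost that makes the operation undoable.
    con.execute("INSERT INTO undo_items SELECT * FROM items WHERE deleted = 1")
    con.execute("DELETE FROM items WHERE deleted = 1")

def undo_empty(con):
    # Reversing the operation is just moving the rows back.
    con.execute("INSERT INTO items SELECT * FROM undo_items")
    con.execute("DELETE FROM undo_items")

empty_recycle_bin(con)
print(con.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 1 left
undo_empty(con)
print(con.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # back to 2
```

The cost is clear from the sketch: every deleted row is written once more before it is removed, so a "delete" of 40GB of data would involve moving 40GB through the undo tables, which is consistent with the long running times reported in this thread.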
#11
Hang on a second! You wrote at the beginning that
"The database is 353Gb." 1 B (byte) is 8 bits, so that's about 40GB. But your db is only 310 MB!!! (Seeing that your document size is about half of it, I assume the rest is stored keywords and index.) http://en.wikipedia.org/wiki/Giga_byte

So that makes the conversation completely different!!! Kinook, please reread this thread; it really shouldn't take 8 hours to manipulate a 5000-item database. As you mentioned, it might be the undo of the Recycle Bin thing ...

Last edited by quant; 06-20-2007 at 10:32 AM.
#12
Quant:
You are correct; I should have expressed the size in Mb, not Gb. My apologies. However, that only magnifies the problem: the correct size of the database is several orders of magnitude smaller, yet the performance is horrible.

Jon
#13
Navigating and searching are fine. Viewing can sometimes be slow, but only for very large chunks of data, so I make allowances. Besides, some (but not all) of your competitors would display large chunks of data slowly too.

Jon

Last edited by Jon Polish; 06-20-2007 at 10:50 AM.
#14
Jon,
Just a note: you're probably still mixing up capital B (bytes) and lower-case b (bits). That makes a difference of about one order of magnitude, because 1B = 8b ...
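The arithmetic behind quant's correction is simple enough to state as a one-liner; the function name here is just for illustration.

```python
BITS_PER_BYTE = 8  # lower-case b = bits, upper-case B = bytes

def gigabits_to_gigabytes(gigabits):
    """Convert gigabits (Gb) to gigabytes (GB)."""
    return gigabits / BITS_PER_BYTE

# The 353 "Gb" figure from post #1, if read literally as gigabits:
print(gigabits_to_gigabytes(353))  # about 44 GB, not 353 GB
```

So "353Gb" read literally is roughly 44GB; the actual database turned out to be 310 MB, smaller again by two orders of magnitude.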
#15
Jon

P.S. By the way, I greatly appreciate the interest in my problem. Thank you, Quant.