Kinook Software Forum
  #1  
06-10-2021, 12:58 PM
Spliff
Registered User
 
Join Date: 04-07-2021
Posts: 212
Reasonable database sizes

I originally had about 50,000 items in my main UR database, plus 4 secondary databases. I then imported one of these into the main database, bringing it to about 70,000 items at 1.5 GB (5.0 GB including item rich text; a little less in fact, since UR reports the figures in bytes).

Everything continued to work smoothly. Note that I'm speaking of a traditional spinning WD Red HDD here (with an i7 and 16 GB of RAM); ALL of my findings may be VERY different if you keep your databases on an SSD.

I then imported another of my secondary databases, with about 55,000 items, for a total of 123,000 items, 2.12 GB in size with 9.43 GB of item rich text.

This worked acceptably - though no longer really well - for a day or two. Then, without further additions except a few dozen new items from my ordinary workflow, I got lags of many seconds or even minutes, even for very simple operations such as creating or moving an item.

Thus I deleted the imported database from the main database and ran a "Compact and Repair", and, with just under 75,000 items, everything is smooth again.
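For readers curious what that step amounts to: UR databases are SQLite files, so "Compact and Repair" presumably boils down to something like SQLite's integrity check plus VACUUM, which rewrites the database into a compact, contiguous file. A minimal sketch (the path is hypothetical, and UR must be closed first):

[code]
import sqlite3

DB_PATH = r"D:\UR\main.urd"  # hypothetical location of the .urd file

# autocommit mode, since VACUUM cannot run inside a transaction
con = sqlite3.connect(DB_PATH, isolation_level=None)
try:
    # the "repair" half: verify the file structure is sound
    print("integrity_check:", con.execute("PRAGMA integrity_check").fetchone()[0])
    # the "compact" half: rewrite the file, discarding free pages
    con.execute("VACUUM")
finally:
    con.close()
[/code]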

I had tried to "blow up" my main database this way in order to get the benefit of inter-linking items or even sub-trees (i.e. the parent items of others) within the main database: while linking to other databases is technically possible, links within the same database work much more smoothly. For example, I had hoped to inter-link special software items with the subjects where I use that software, so that "hints and tricks" and such would be readily available where I need them, instead of being tucked away in my "IT" database.

I now make copies of the original items instead, but this implies de-syncing whenever I "update" them "here" instead of "there" - hence my hope for the "very big" database.

Other users with similar needs - most will not run into my size problems, though - could load the secondary database automatically together with their main database, and then simply work on the linked "original" item over there.

This being said, one can read on the web - true or not - that some "competitors" become unstable in the (higher) 4-digit item range; the above at least proves that UR has NO size problem with 75,000 items, even on an HDD.
  #2  
06-28-2021, 01:44 PM
kinook
Administrator
 
Join Date: 03-06-2001
Location: Colorado
Posts: 6,034
I have a database with about 40,000 items, 0.75 GB in size. In this database, I duplicated all of the items 4 times, so it now has 165,000 items and is 2.9 GB in size. For queries with only a few results, I could not detect a difference in search speed (typically instant). For queries with hundreds of results, the speed difference was proportional to the number of matching items (4x as many in the second database). It doesn't make much sense that searches would suddenly get slower without anything changing. Could some options have changed?
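A self-contained sketch of that experiment on a toy schema (UR's actual internal schema isn't assumed here): with an index, queries with few matches stay effectively instant regardless of database size, while broad queries scale with the number of matching rows - roughly 4x slower after quadrupling the data.

[code]
import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, tag INTEGER, title TEXT)")
con.execute("CREATE INDEX idx_tag ON item(tag)")
con.executemany("INSERT INTO item(tag, title) VALUES (?, ?)",
                [(i % 1000, f"note {i}") for i in range(40_000)])

def timed(where, arg):
    t0 = time.perf_counter()
    rows = con.execute(f"SELECT id FROM item WHERE {where}", (arg,)).fetchall()
    print(f"{len(rows):>6} rows in {time.perf_counter() - t0:.4f}s")

timed("tag = ?", 5)    # few matches: effectively instant via the index
timed("tag < ?", 250)  # many matches: time grows with the result count

for _ in range(2):     # doubling twice = 4x the rows, as in the test above
    con.execute("INSERT INTO item(tag, title) SELECT tag, title FROM item")

timed("tag = ?", 5)    # still near-instant
timed("tag < ?", 250)  # ~4x the matches, roughly 4x the time
[/code]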
  #3  
07-06-2021, 03:53 PM
cnewtonne
Registered User
 
Join Date: 07-27-2006
Posts: 519
This is my largest database and performance has been stellar.
[Attached image]
  #4  
07-07-2021, 02:48 AM
Spliff
Registered User
 
Join Date: 04-07-2021
Posts: 212
Sorry, Kyle, I only saw your post just now, together with cnewtonne's.

As I've said across several threads, I ran into all sorts of problems with too big a database, not only with search. A few days ago I finally defragmented my HDD - I must admit I hadn't thought of defragmenting it for a very long time, and with a database of 1 GB or more, fragmentation will almost certainly become a real problem - so I've come back today to mention this very important fact, which obviously puts all of the above into perspective.

After the defragmentation I haven't seen any real change, though (with my now 60,000-item and smaller databases): the greying-out of cut items (Ctrl+X) had already, gradually (!), come back even before, depending on the "part" of the database, and the roughly 15-second wait upon loading has persisted beyond the defragmentation (i.e. it's as before).

I discovered that most defragmentation tools - even most of the free ones! - can defragment just specific folders and/or files, so after one big defragmentation it will probably be sufficient to re-defragment just your UR databases (i.e. the *.urd files in some specific folder) again and again.

This functionality is quite hidden in most tools and only available indirectly. For example, in my O&O Defrag Pro it's under Settings (the button, not the menu), then Files: under "Excluded files" (!) I put D:\*.*, and under "Files that must be defragmented" I put D:\UR\*.*. Other such tools are similar, judging from screenshots and "manual" info.
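A scripted alternative to hunting through GUI settings: Microsoft's free Sysinternals Contig tool defragments individual files, so a few lines can re-defragment just the databases. A sketch, assuming contig.exe is on the PATH, the databases live in D:\UR (a hypothetical folder), and UR is closed so the files aren't in use:

[code]
import subprocess
from pathlib import Path

UR_DIR = Path(r"D:\UR")  # hypothetical folder holding the .urd files

for urd in UR_DIR.glob("*.urd"):
    # -v reports the fragment counts and what was moved
    subprocess.run(["contig.exe", "-v", str(urd)], check=True)
[/code]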

Notable paid tools are O&O Defrag Pro ($30 for 1 PC), PerfectDisk Pro ($40 for 3, $60 for all home PCs), and UltraDefrag ($20 for 3, $40 for all home PCs) - not to be mixed up with UltimateDefrag ($30; from the interface you would think it's W95). All of them allow this per-file defragmentation.

Diskeeper is finished; it's DymaxIO now, rental only, from $50 per month (!) onward (no joke - that's for 5 PCs, and you pay for all of them whether you need them or not).

Notable free tools which, from my web research, also allow per-file defragmentation are GlarySoft Disk SpeedUp and Defraggler; I'm not sure about Auslogics Disk Defrag, though. There's also Puran Defrag (also with that functionality), which had been deemed "dangerous for your files" - but to say it all, that info is from 2012...

Ideally, SQLite software like UR would "reserve" some HDD space right after its current allocation: e.g. for a 1 GB SQLite file, the next 500 MB or so would not be marked as "free" for the NTFS file system. Technically that should even be possible, but only by writing dummy data there up front, to be gradually replaced by real data; and ideally, when the reserved space runs out, the SQLite application would rewrite the whole database elsewhere on the HDD, again reserving additional space.
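A minimal sketch of that dummy-data idea on a scratch file (do NOT try this on a live .urd; the file name is hypothetical): extending a file with zeros ahead of need makes NTFS allocate the clusters once, ideally contiguously, after which the application can overwrite the padding with real data in place. Incidentally, SQLite itself offers something close via its SQLITE_FCNTL_CHUNK_SIZE file control, which makes the database file grow in large chunks - but that has to be set from C by the application, not by the end user.

[code]
SCRATCH = r"D:\UR\scratch.dat"  # hypothetical file standing in for a database
RESERVE = 500 * 1024 * 1024     # grow by ~500 MB ahead of need
CHUNK = 1024 * 1024             # write the zeros in 1 MB chunks

with open(SCRATCH, "ab") as f:  # append mode: extend past the current end
    zeros = b"\0" * CHUNK
    for _ in range(RESERVE // CHUNK):
        f.write(zeros)          # dummy data; real data later overwrites it in place
[/code]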

As for such functionality in any defragmentation tool, I found no hint of it, and I suppose it would be technically impossible from that side; as said, from the SQLite application side it would indeed be possible, but it would probably be deemed somewhat "exotic".