|Contact person||Jesus David Rodriguez|
|Product||Magaya Cargo System|
At the beginning we experienced some problems with the implementation, but after two or three months of testing and some small adjustments we got it working. Most of these adjustments we sent to you back in 2000; others applied to versions older than the one you had at the time, and we didn't want to bother you with out-of-date problems.
|Dipl.-Ing. Joerg Hobelmann||Phone: +49-531-2359-246|
|Aerodata Systems GmbH||Fax: +49-531-2359-158|
|Flight Inspection Systems||firstname.lastname@example.org|
|D-38108 Braunschweig, Germany|
GOODS is part of the company's new flight inspection software and has done a great job.
First, the GOODS database works well right off the bat for smaller-scale systems, using the straightforward coding examples included in the kit, especially the University sample. I had previously used POET for several years, but ran into licensing costs on the order of tens of thousands of dollars. I switched to GOODS and will not look back. However, for large deployments, meaning databases on the order of millions to billions of objects, there is a significant amount of work to be done.
The most important consideration is the basic data structure, or rather the data hierarchy, and what information is to be gleaned from that inherent hierarchy; this is the most critical design element to understand. The second important consideration is choosing the collection type for your data, such as an array, B-tree, etc. GOODS is the only OODB I know of that includes these collection classes as part of the functioning base system, and they do work. However, each of them trades off efficiency in finding data against allocation space.
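The trade-off mentioned above can be illustrated outside GOODS itself. The sketch below uses plain standard C++ (not the GOODS collection API): a flat array has minimal space overhead but needs a linear scan, while a balanced tree pays per-node pointer overhead for logarithmic lookup.

```cpp
#include <algorithm>
#include <set>
#include <vector>

// Flat array: compact storage, but lookup is a linear O(n) scan.
bool find_in_array(const std::vector<int>& arr, int key) {
    return std::find(arr.begin(), arr.end(), key) != arr.end();
}

// Balanced tree (std::set is typically a red-black tree): each node
// carries pointer overhead, but lookup is O(log n) -- the same kind
// of space-versus-search trade-off a B-tree collection makes
// against a flat array member.
bool find_in_tree(const std::set<int>& tree, int key) {
    return tree.count(key) != 0;
}
```

Which one wins depends on the access pattern: small, rarely-searched collections favor the array; large, frequently-searched ones favor the tree.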
Both client and server run on the Windows family, Solaris and Linux. The database model is a hierarchical structure of items and folders, where folders are ordered collections of items, implemented as a kind of B-tree. Items are collections of <name, value> properties, where "value" can be any CORBA value (from IDL) marshalled down to a stream of bytes. Values can be pointers to items and folders, or even to foreign CORBA objects. This way any topology can be developed, not just a tree; the tree is just the "physical" layout, and it can grow to any depth.

Then there are indexes. This was a big piece of work, since it required developing quite a number of data structures, all modelled after some kind of B-tree. Variable-length keys were the hard beast, together with handling inverted lists for full-text indexing. Indexes are clustered into groups which can be placed onto any item, and they collect all items in the subtree below, down to a predefined depth. This step is driven by mapping rules defining which properties are good candidates for entering the current item into the index, optionally through a converter plug-in that transforms a property value into something acceptable to the index definition. A SELECT-like grammar for queries completed the picture.

I deliberately chose to keep indexes apart from items, so that data is data and can be accessed by direct navigation, while indexes are handled by queries, which provide in-memory collections of items retrievable on demand. This is a big difference compared to an RDBMS, where indexes are almost hidden, while programmers must still somehow know where they are to get reasonable answer times.
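The item/folder model described above can be sketched as a simplified stand-alone C++ illustration. The names `Item`, `Folder` and `Value` here are mine, not from the author's system: an item is a map of named property values, a value can be a scalar or a pointer to another item (so arbitrary graphs can grow over the tree layout), and a folder is an ordered collection of named children.

```cpp
#include <map>
#include <memory>
#include <string>
#include <utility>
#include <variant>
#include <vector>

struct Item;
using ItemPtr = std::shared_ptr<Item>;

// A property value: a marshalled scalar here stands in for "any
// CORBA value"; an ItemPtr lets values point at other items, so
// any topology can be built on top of the physical tree.
using Value = std::variant<long, std::string, ItemPtr>;

// An item is a collection of <name, value> properties.
struct Item {
    std::map<std::string, Value> properties;
};

// A folder is an item whose children form an ordered collection of
// named items; in the real system that ordered collection is
// implemented as a kind of B-tree.
struct Folder : Item {
    std::vector<std::pair<std::string, ItemPtr>> children;
};
```

Direct navigation then walks `children` and `properties`, while queries over the separate indexes would return in-memory collections of `ItemPtr`.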
The overall performance is excellent. We specifically tuned the system to get high throughput for COLD-style operations (data pumping). For instance, we soon discovered that turning GC off is a must in such cases, as is providing a secondary cache to cluster all page writes together.
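The page-write clustering mentioned above can be sketched as a small write-coalescing buffer: dirty pages are collected (rewrites of the same page coalesce) and handed to the storage in one sorted batch instead of one write at a time. All names are illustrative, not the actual GOODS interface.

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <utility>

// Toy secondary cache for bulk loads: buffer dirty pages, then
// flush them as one clustered batch, sorted by page id so the
// storage sees mostly sequential I/O.
class WriteClusterCache {
public:
    using Batch = std::map<int, std::string>;

    WriteClusterCache(std::size_t limit, std::function<void(const Batch&)> flush_fn)
        : limit_(limit), flush_fn_(std::move(flush_fn)) {}

    void write(int page_id, std::string data) {
        dirty_[page_id] = std::move(data);   // rewrites of a page coalesce
        if (dirty_.size() >= limit_) flush();
    }

    void flush() {
        if (!dirty_.empty()) {
            flush_fn_(dirty_);               // one clustered batch write
            dirty_.clear();
        }
    }

private:
    std::size_t limit_;
    std::function<void(const Batch&)> flush_fn_;
    Batch dirty_;                            // sorted by page id
};
```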
I actually use a modified version of GOODS, since several of Konstantin's assumptions do not match my needs. For example, the server exits whenever something very bad occurs, and this is not acceptable for industrial applications. Client threads CAN deadlock while accessing objects, since this depends on end-user operations and not just on application design (as assumed by Konstantin), so I needed a deadlock hunter with retry. The same goes for storage locking in the case of a single-storage database (multiple storages must still rely on a timeout, then return). Exceptions had to be introduced everywhere too, together with an associated transaction rollback.
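A deadlock hunter with retry of the kind described can be sketched as a generic retry loop: the transaction body runs, and if the deadlock detector aborts it, the work is rolled back and reattempted. The exception type and function names below are hypothetical stand-ins, not the modified GOODS API.

```cpp
#include <functional>
#include <stdexcept>

// Hypothetical exception the deadlock detector would raise; in the
// modified server this would come from the lock manager.
struct DeadlockDetected : std::runtime_error {
    DeadlockDetected() : std::runtime_error("deadlock detected") {}
};

// Run a transaction body, retrying after each detected deadlock.
// Returns true if the body eventually ran to completion, false if
// the retry budget was exhausted.
bool run_with_deadlock_retry(const std::function<void()>& body, int max_retries) {
    for (int attempt = 0; attempt <= max_retries; ++attempt) {
        try {
            body();        // would execute inside a fresh transaction
            return true;   // committed
        } catch (const DeadlockDetected&) {
            // transaction rolled back by the server; try again
        }
    }
    return false;
}
```

The key point is that the retry decision sits outside the application logic, since (as noted above) whether a deadlock occurs depends on end-user behavior, not just on application design.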
Then the mops: Konstantin assumed a fixed, application-driven scheme for all metaobjects, while I prefer to leave the choice (pessimistic/optimistic) to the programmer. The same holds true for transactions: I introduced explicit transaction control, while leaving the implicit (automatic) kind in place. A further modification adds the capability to attach storages on the fly, while the original structure assumed a fixed, predefined set per database. I'm still curious to see how long a server can keep a storage up without problems.
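Explicit transaction control of the kind described can be sketched as an RAII guard: `commit()` must be called explicitly, and destruction without a commit rolls the work back, which keeps early returns and exceptions safe. The class and the callback hooks are illustrative, not the modified GOODS API.

```cpp
#include <functional>
#include <utility>

// RAII guard for explicit transaction control. The commit/rollback
// callbacks stand in for the database hooks; all names here are
// illustrative.
class Transaction {
public:
    Transaction(std::function<void()> commit_fn, std::function<void()> rollback_fn)
        : commit_fn_(std::move(commit_fn)),
          rollback_fn_(std::move(rollback_fn)) {}

    // Leaving scope without commit() -- early return, exception --
    // rolls the transaction back automatically.
    ~Transaction() {
        if (!done_) rollback_fn_();
    }

    void commit() {
        commit_fn_();
        done_ = true;
    }

private:
    std::function<void()> commit_fn_;
    std::function<void()> rollback_fn_;
    bool done_ = false;
};
```

This coexists naturally with the implicit (automatic) scheme: code that never constructs a guard keeps the original behavior.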
However, the overall result and performance level are very exciting.
I will be glad to receive your response about GOODS.