Jesus David
Company: Magaya Corporation
Contact person: Jesus David Rodriguez
Product: Magaya Cargo System

My name is Jesus David, and I'm writing on behalf of the Magaya Corporation development team. We started using your GOODS database back in 1999, when we began development of our Magaya Cargo System. We want to thank you again for the opportunity to use your database, because it has been a key factor in our development. We use your default database server wrapped in a Windows user interface, and your default client with very few modifications to the MOP. At this moment we have approximately 300 server installations of the GOODS database in 20 different countries, and more than 1000 users (clients of the database). You can visit our web site for more information about our company and services, and you can also download the Magaya Cargo System from the following link if you want to see it. In general the database works very fast, requires no administration at all, and is very reliable. The on-the-fly backup is one of the most useful features.

At the beginning we experienced some problems in the implementation, but after two or three months of testing and some small adjustments we got it working. We sent most of these adjustments to you in the year 2000; the others applied to versions earlier than the one you had at the time, and we didn't want to bother you with out-of-date problems.

Joerg Hobelmann
Dipl.-Ing. Joerg Hobelmann
Aerodata Systems GmbH
Flight Inspection
Hermann-Blenk-Strasse 36
D-38108 Braunschweig, Germany
Phone: +49-531-2359-246
Fax: +49-531-2359-158

Two years ago, we decided to use GOODS as the database for a flight inspection system on board a Global Express airplane (Bombardier/Canada). The system has now come to delivery, and GOODS has always performed well without any problems. Its stability and crash recovery have been very valuable, and I would like to express my regards for your fast help whenever we had questions.

GOODS is part of the company's new flight inspection software and has done a great job.

Thomas Winter
I have been using the GOODS database for two years, with deployments in two Fortune 500 companies. I am currently on the third revision of my "basic system," after finding what works really well versus what just works well. Here are some thoughts and pointers.

First, the GOODS database works well right off the bat for smaller-scale systems, using the straightforward coding examples included in the kit, especially the University sample. I had previously used POET for several years, but ran into licensing costs on the order of tens of thousands of dollars. I switched to GOODS and will not look back. However, for large deployments, defined as large databases on the order of millions to billions of objects, there is a significant amount of work to be done.

The most important consideration is the basic data structure, or rather the data hierarchy, and what information is to be gleaned from that inherent hierarchy; this is the most critical design element to understand. The second important consideration is choosing the collection type for your data, such as an array, B-tree, etc. GOODS is the only OODB I know of which includes these collection classes as part of the functioning base system, and they do work. However, each involves a trade-off between efficiency in finding data and allocation space.
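The trade-off described above can be sketched without the GOODS headers at all. The following is a minimal, GOODS-independent illustration using standard containers as stand-ins: a `std::vector` plays the role of a compact array collection, and a `std::set` (a node-based ordered tree) plays the role of a B-tree collection. The function names are hypothetical, not part of any GOODS API.

```cpp
#include <algorithm>
#include <set>
#include <vector>

// Array-style collection: compact storage with no per-element
// overhead, but a membership test is a linear scan, O(n).
bool contains_linear(const std::vector<int>& arr, int key) {
    return std::find(arr.begin(), arr.end(), key) != arr.end();
}

// Tree-style collection: O(log n) lookup, paid for with extra
// allocation space for the per-element node structure.
bool contains_tree(const std::set<int>& tree, int key) {
    return tree.count(key) != 0;
}
```

For a handful of elements the array wins on space and even speed; for millions of elements the logarithmic lookup of the tree dominates, which is exactly why the choice of collection type matters at large scale.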

Renzo Tomaselli
Having been a GOODS user too, for almost three years and nearly full time, I'd like to throw in a couple of bits about it. First of all, the code is very stable and well written (I love those ref<> tricks and the MOP machinery behind them), and Konstantin's support is very good. I adopted GOODS to design a distributed document management system for a customer of mine, although only after applying some significant changes. First, I use CORBA for all communications infrastructure, so I had to replace Konstantin's SAL with equivalent CORBA models, one for the client and another for the server. Second, I never used the original GOODS server: I deployed a generic storage server which handles any number of storages for any number of databases.

Both client and server run on the Windows family, Solaris and Linux. The database model is a hierarchical structure of items and folders, where folders are ordered collections of items, implemented as a kind of B-tree. Items are collections of <name,value> properties, where "value" can be any CORBA value (from IDL) marshalled down to a stream of bytes. Values can be pointers to items and folders, or even to foreign CORBA objects. This way any topology can be developed, not just a tree; the tree is just the "physical" layout, and it can extend down to any depth.

Then there are indexes. This was a big piece of work, since it required developing quite a number of data structures, all modelled after some kind of B-tree. Variable-length keys were the hard beast, together with handling inverted lists for full-text indexing. Indexes are clustered into groups which can be placed onto any item, and they collect all items in the subtree below, down to a predefined depth. This step is driven by mapping rules that define which properties are good candidates for entering the current item into the index, optionally through a converter plug-in that transforms the property value into something acceptable to the index definition. A SELECT-like grammar for queries completed the picture. I deliberately chose to keep indexes apart from items, so that data is data and can be accessed by direct navigation, while indexes are handled by queries, providing in-memory collections of items which can be retrieved on demand. This is a big difference compared to an RDBMS, where indexes are almost hidden, while programmers still must somehow know where they are to get reasonable answer times.
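The item/folder model described above can be outlined in a few plain C++ types. This is a hedged sketch of the general shape, not the author's actual classes: the names `Item`, `Folder` and `Value` are illustrative, and a `std::vector<unsigned char>` stands in for a CORBA value marshalled down to a byte stream.

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Stand-in for a CORBA value marshalled to a stream of bytes.
using Value = std::vector<unsigned char>;

// An item is a collection of <name, value> properties.
struct Item {
    std::string name;
    std::map<std::string, Value> properties;
};

// A folder is an ordered collection of items; subfolders let the
// "physical" tree develop down to any depth.
struct Folder {
    std::string name;
    std::vector<std::shared_ptr<Item>> items;
    std::vector<std::shared_ptr<Folder>> subfolders;
};

// Build a one-folder, one-item tree with a single property.
std::shared_ptr<Folder> make_sample_tree() {
    auto doc = std::make_shared<Item>();
    doc->name = "invoice-42";
    doc->properties["author"] = Value{'r', 'e', 'n', 'z', 'o'};
    auto root = std::make_shared<Folder>();
    root->name = "root";
    root->items.push_back(doc);
    return root;
}
```

Because properties hold opaque byte streams, a value can just as well encode a pointer to another item or folder, which is how topologies other than a pure tree become possible.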

The overall performance is excellent. We specifically tuned the system to get high throughput for COLD-style operations (data pumping). For example, we soon discovered that turning GC off is a must in such cases, as is providing a secondary cache to cluster all page writes together.

I actually use a modified version of GOODS, since several of Konstantin's assumptions do not match my needs. For example, the server exits whenever something very bad occurs, and this is not acceptable for industrial applications. Client threads CAN deadlock while accessing objects, since this depends on end-user operations and not just on application design (as Konstantin assumed), so I needed a deadlock hunter with retry. The same applies to storage locking in the case of a single-storage database (multiple storages must still rely on a timeout, then return). Exceptions had to be introduced everywhere too, together with an associated transaction rollback.
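The "deadlock hunter with retry" idea can be sketched as a small generic pattern. Everything here is hypothetical scaffolding, not the modified GOODS client's real API: `deadlock_error` stands in for whatever signal the deadlock detector raises, and the rollback is assumed to have already happened before the retry.

```cpp
#include <functional>
#include <stdexcept>

// Hypothetical exception standing in for the client's deadlock
// detection; the real mechanism is not shown in the text above.
struct deadlock_error : std::runtime_error {
    deadlock_error() : std::runtime_error("deadlock detected") {}
};

// Run a transactional operation; if it is aborted by a detected
// deadlock, retry it up to max_retries times before giving up.
bool run_with_retry(const std::function<void()>& op, int max_retries) {
    for (int attempt = 0; attempt <= max_retries; ++attempt) {
        try {
            op();
            return true;   // operation committed
        } catch (const deadlock_error&) {
            // transaction was rolled back; fall through and retry
        }
    }
    return false;          // retry budget exhausted
}
```

The key design point is that the retry happens outside the transaction boundary, so each attempt starts from a clean, rolled-back state rather than from a half-applied one.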

Then the MOPs: Konstantin assumed a fixed, application-driven scheme for all MOPs, while I prefer to leave this choice (pessimistic/optimistic) to the programmer. The same holds true for transactions: I introduced explicit transaction control, while leaving the implicit (automatic) kind in place. A further modification adds the capability to attach storages on the fly, while the original structure assumed a fixed, predefined set per database. I'm still investigating how long a server can keep a storage up without problems.
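One common way to layer explicit transaction control over an implicit scheme is an RAII guard, and the modification described above could plausibly take this shape. This is only an assumption-laden sketch: the `commit`/`rollback` names are illustrative and are not the modified GOODS client's actual interface.

```cpp
// Explicit transaction scope with an implicit fallback: if the
// programmer neither commits nor rolls back, the destructor
// performs an automatic rollback (the implicit behaviour).
class transaction {
  public:
    transaction() : active_(true), committed_(false) {}  // begin
    void commit()   { active_ = false; committed_ = true; }
    void rollback() { active_ = false; committed_ = false; }
    ~transaction() { if (active_) rollback(); }
    bool committed() const { return committed_; }
  private:
    bool active_;
    bool committed_;
};
```

With such a guard, code that never mentions the transaction object still gets the automatic behaviour, while code that needs precise control calls `commit()` or `rollback()` explicitly.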

However the overall result and performance level are very exciting.

I will be glad to receive your responses about GOODS.