Custom databases for the USA

We build and support custom database projects, running where and how you need them.

Whatever your data needs, we know databases, storage, and reporting at every scale.

With decades of experience *using* data in the programs and reports we’ve written, we know how to choose the right database for your project, and how to design the tables and indexes that best support it. And as system specialists, we know how to install, tune, manage, and safeguard your databases and their information.

On the scale of cell phones and small computer projects: SQLite, of course.
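
To give a feel for the table-and-index design we mentioned above, here is a minimal sketch using Python's built-in sqlite3 module; the table, index, and file names are ours, invented for illustration.

    import sqlite3

    # SQLite keeps the whole database in a single local file -
    # ideal for phones, desktop apps, and small projects.
    conn = sqlite3.connect("orders.db")
    cur = conn.cursor()

    # Designing the table up front: typed columns and a primary key.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            id          INTEGER PRIMARY KEY,
            customer    TEXT NOT NULL,
            placed_on   TEXT NOT NULL,   -- ISO-8601 date
            total_cents INTEGER NOT NULL
        )
    """)

    # An index chosen to support the queries we expect to run.
    cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer "
                "ON orders(customer)")

    cur.execute("INSERT INTO orders (customer, placed_on, total_cents) "
                "VALUES (?, ?, ?)", ("Acme Corp", "2024-01-15", 4999))
    conn.commit()

    # The index above keeps this lookup fast as the table grows.
    for row in cur.execute("SELECT placed_on, total_cents FROM orders "
                           "WHERE customer = ?", ("Acme Corp",)):
        print(row)

    conn.close()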

On the scale of local personal computers or office networks: the classic PC-based names you probably know - MySQL, Microsoft's SQL Server, Oracle - and, for smaller projects, MS Access.
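
From application code, connecting to one of these server-based databases looks much the same as the SQLite sketch above. A minimal example against MySQL, assuming the mysql-connector-python package and placeholder credentials:

    import mysql.connector  # pip install mysql-connector-python

    # Host, user, password, and database names are placeholders.
    conn = mysql.connector.connect(
        host="localhost",
        user="report_user",
        password="change-me",
        database="sales",
    )
    cur = conn.cursor()

    # The same SQL we used with SQLite, now served over the network.
    cur.execute("SELECT customer, SUM(total_cents) FROM orders "
                "GROUP BY customer")
    for customer, total in cur.fetchall():
        print(customer, total)

    cur.close()
    conn.close()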

Some of these same names are found further up the infrastructure scale, running on servers in data centers or in the cloud - except for MS Access, which should stay on your local office network or server.

How large can we go here? Our favorite, MySQL, can store up to 256 TB in a single table. (That "TB" stands for terabyte, about 1024 gigabytes; one terabyte translates to about 200,000 5-minute songs, 310,000 pictures, or 500 hours’ worth of movies.) This is a lot of data, but other limits apply well before then - such as the maximum file size the operating system supports, and the efficiency of the database itself - so we wouldn’t suggest using MySQL at this scale.
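
The song figure is easy to check with back-of-the-envelope arithmetic, assuming a 5-minute MP3 runs about 5 MB:

    # One terabyte in binary units: 1 TB = 1024 GB = 1024 * 1024 MB.
    tb_in_mb = 1024 * 1024           # 1,048,576 MB

    song_mb = 5                      # assumed size of a 5-minute MP3
    print(tb_in_mb / song_mb)        # ~209,715 songs - roughly 200,000

    picture_mb = tb_in_mb / 310_000  # implied picture size: ~3.4 MB each
    movie_gb_per_hour = 1024 / 500   # implied movie rate: ~2 GB per hour
    print(picture_mb, movie_gb_per_hour)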

In other words, for most needs a classic database is fine. Indeed, for hundreds of applications, both on local networks and in the cloud, we’ve found MySQL (it’s free), SQL Server, or Oracle to serve well.

But if you have massive amounts of data, and if speed is vital, we need to examine the next scale. And this scale is defined by the kind of data you’re storing, as well as by where it lives.

Let’s begin by noting the rise of a new kind of database: “NoSQL”. These systems have no fixed field definitions, and perhaps the best-known example is MongoDB. In a nutshell: we’ve worked with it, and we find it to be a bad idea for most business uses. Querying it can be difficult, and we find its purported benefit (you can start building without defining what you are storing) to be counterproductive. We believe you *should* know what you wish to store before you build!
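
To illustrate the point, a minimal sketch assuming the pymongo package and a local MongoDB server; the collection and field names are invented for illustration:

    from pymongo import MongoClient  # pip install pymongo

    db = MongoClient("mongodb://localhost:27017")["shop"]

    # Nothing stops two documents in the same collection from
    # disagreeing about their fields - no schema is enforced.
    db.orders.insert_one({"customer": "Acme Corp", "total_cents": 4999})
    db.orders.insert_one({"cust_name": "Acme Corp", "total": 49.99})

    # This query silently misses the second order, because it was
    # stored under a different field name. The "flexibility" has
    # become a reporting problem.
    for doc in db.orders.find({"customer": "Acme Corp"}):
        print(doc)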

Back to the mainstream of databases at scale: there are other structural limits to classic databases. For example, MySQL can only use one CPU core per query, whereas newer creations purpose-built for large data, such as Spark, can use all the cores on all the nodes of a cluster.

Note the term “cluster node”: the idea is to split the data into sections, “shards”, spread across multiple machines, and then to query the entire cluster with a single query. In other words, the database now has a kind of “operating system” built in, letting it work at this larger scale of multiple machines.
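
Here is what that single query over a whole cluster looks like in practice; a minimal sketch assuming the pyspark package, with a made-up file path:

    from pyspark.sql import SparkSession  # pip install pyspark

    # Spark plans the query, splits the work across every core on
    # every node of the cluster, and merges the results for us.
    spark = SparkSession.builder.appName("orders-report").getOrCreate()

    # The data may be sharded across many machines; we still read
    # and query it as one table. The path is a placeholder.
    orders = spark.read.parquet("hdfs:///data/orders")

    (orders.groupBy("customer")
           .sum("total_cents")
           .orderBy("customer")
           .show())

    spark.stop()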

Our favorite system of this nature is Apache Spark, whose predecessor at Apache, Hadoop, long held the title of most successful of the large-scale database systems.

Continuing our overview: all of the larger cloud providers (Amazon, Google, Microsoft) have proprietary large-scale database offerings in their cloud ecosystems.

Here you must ask yourself: do you need this scale? The advantages are both the supported size and the automated scaling of processing power and storage. You’ll also benefit from offloading the infrastructure - until, perhaps, something goes wrong and you find you have limited choices for data recovery. We can help with both the decision-making and the later management of these systems.