
Recent work on the Pyrrho DBMS.

BACKGROUND

Since 2004 my personal research activity has mostly focused on the development of a very strict SQL DBMS called Pyrrho. The purpose of this informal paper is to explain this activity and indicate where it may lead.

Like other academics I have also been involved in research projects and in supervising research students, and have naturally developed some software for them. Many research students and fellow academics have been kind enough to co-author a published paper with me, or in some cases to acknowledge earlier input by listing me as a co-author even where I did not contribute directly to the paper. Thus I can list 12 published papers in the last four years, but I was neither surprised nor very disappointed when I was not included in the current research assessment exercise.

Although Pyrrho has been quite an obsession, it has led to very few published papers. Partly this is because editors of journals and others who know about research in computing dismiss relational DBMS implementation as a mature area, where nobody is doing any research. However, relational DBMS technology, as it is implemented in commercial database products, dates from 1974, and computers have changed quite a bit since then.

My motivation for developing a new DBMS was dissatisfaction with this ancient technology. For example, on current commercial systems, database administrators have the power to "change history" by making untraceable changes, such as alterations to accounting systems even after they have been finalised and signed off by auditors. More generally, when changes are made it is impossible to find out who made them, or what previous value was overwritten. I felt that as Computer Scientists we have a duty to design and build systems where such bad behaviour is made difficult [Crowe 2004]. So in Pyrrho the database actually consists of the transaction log, so that the entire history is visible, and every transaction records details of who made the change, when, and to some extent why.

There were other reasons for dissatisfaction: already in 2004 it was clear that pessimistic concurrency control (client-side locking of databases), universally implemented in commercial DBMS products then as now, was unsuited to world-wide information systems. There was also a new international standard, SQL2003 (latest version ISO/IEC 9075, 2011), which apparently none of the major players planned to adopt.

Finally, for my taste, no commercial database product adheres strongly enough to the ACID principles of transaction atomicity, consistency, isolation and durability. Even the SQL standard is equivocal: it allows options in a SET TRANSACTION statement that include the ability to read uncommitted data (breaching the isolation principle), and settings in FOREIGN KEY constraints that allow inconsistent data to be committed.
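
As an illustration (a minimal sketch, using the standard SQL statement and the generic JDBC API rather than Pyrrho's own client library; the connection URL is invented), the weakening that the standard permits looks like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL, for illustration only.
        try (Connection conn = DriverManager.getConnection("jdbc:example://localhost/test")) {
            try (Statement st = conn.createStatement()) {
                // Standard SQL: weakens isolation so that dirty reads become possible.
                st.execute("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
            }
            // The same weakening exposed through the JDBC API.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
        }
    }
}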

PHYSICAL MEDIA AND TRANSACTION SAFETY

The first steps in the design involved consideration of what hardware changes were relevant to the fundamental DBMS redesign that was in view. First, computer memory (RAM) was already much larger than in 1974, so there was much less need to keep all the DBMS data structures on disk. Initially many critics of Pyrrho's design focussed on its need for amounts of memory comparable to the size of the database on disk, but from the perspective of 2014 these objections seem just foolish. Many real-time databases today reside entirely in RAM, but the main commercial databases continue to implement their data structures on disk, despite the consequential non-atomicity of transactions. Secondly, in 2004 write-once media looked attractive as a means of storing the transaction log. This idea was quickly abandoned as such media did not take off in the market, maybe because, as with Pyrrho, mistakes could not be corrected untraceably. Instead, Pyrrho uses a cryptographic lock on the database to detect, and mark as unusable, any database that has been tampered with.

It was essential to Pyrrho's design to allow no compromises on the ACID principles, so some early work dealt with data structure design that would make committing a transaction as atomic as possible: a single pointer update in memory and a single contiguous disk write request. In practice this has almost been achieved: if only one server and one database is involved, the current design has a single pointer (at the level of the Physical database) that is updated on commit, though of course for a large transaction the single write request might extend over several physical blocks. For safely committing transactions in distributed databases some use of the famous three-phase commit algorithm is unavoidable.
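
A minimal sketch of that commit discipline, with invented names (this is not Pyrrho's actual code): the transaction's record is appended to the log in one contiguous write, forced to disk, and only then does a single in-memory pointer move to the new database state.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative only: DatabaseState stands in for the immutable in-memory
// structures described below.
interface DatabaseState { }

final class CommitSketch {
    private final FileChannel log;                      // append-only transaction log
    private final AtomicReference<DatabaseState> head;  // the single committed-state pointer

    CommitSketch(FileChannel log, DatabaseState initial) {
        this.log = log;
        this.head = new AtomicReference<>(initial);
    }

    void commit(byte[] transactionRecord, DatabaseState newState) throws IOException {
        // One contiguous append; for a large transaction the operating system
        // may spread this over several physical blocks.
        ByteBuffer buf = ByteBuffer.wrap(transactionRecord);
        while (buf.hasRemaining())
            log.write(buf, log.size());
        log.force(true);     // make the append durable before publishing it
        head.set(newState);  // the single atomic pointer update in memory
    }

    DatabaseState committed() { return head.get(); }  // readers see only committed state
}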

Otherwise the single-disk-write property is easy to achieve: committing a transaction appends the record of the transaction to the end of the transaction log, and there is a single transaction log per database.

For the in-memory data structures I developed an immutable version of B*-trees, so that holding the head of a tree is all that is needed to give concurrent access to either the new or the old version of the tree, while allowing most of the memory to be shared between them. As a transaction proceeds, these structures are updated, but the updates are based on uncommitted data rather than physical records. Each local transaction simply holds its own pointers to the transient heads of the set of logical database objects; these are forgotten at the end of the transaction, when the committed database state is constructed from the data as committed.
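
The essential idea is path copying. The sketch below (my own illustration, using a plain immutable binary search tree rather than the B*-trees Pyrrho actually uses) shows how an insert returns a new head while sharing every untouched subtree with the old one, so readers holding the old head are never disturbed.

// Illustrative path-copying insert on an immutable binary search tree.
// Pyrrho's trees are B*-trees with multi-entry buckets, but the sharing
// principle is the same: an update copies only the nodes on the path from
// the root, and old and new heads share the rest of the structure.
final class Node<K extends Comparable<K>, V> {
    final K key; final V value; final Node<K, V> left, right;

    Node(K key, V value, Node<K, V> left, Node<K, V> right) {
        this.key = key; this.value = value; this.left = left; this.right = right;
    }

    static <K extends Comparable<K>, V> Node<K, V> insert(Node<K, V> t, K k, V v) {
        if (t == null) return new Node<>(k, v, null, null);
        int c = k.compareTo(t.key);
        if (c < 0) return new Node<>(t.key, t.value, insert(t.left, k, v), t.right);
        if (c > 0) return new Node<>(t.key, t.value, t.left, insert(t.right, k, v));
        return new Node<>(k, v, t.left, t.right);   // replace value; both subtrees shared
    }
}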

DATA REPRESENTATION AND SPEED

With B*-trees the logarithmic behaviour of search is achieved using a tree of ordered chunks of data (traditionally called buckets). These buckets have an arbitrary fixed capacity n > 1. In most cases each bucket would occupy around 64n bytes of memory, but the real tradeoff is not between memory and speed: it is between the cost of linearly ordering O(n) items within a bucket and the benefit of a larger logarithmic base. Experiments were conducted to see what the best value of n might be, and to my surprise there was very little difference in performance for values of n between 6 and 32.
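
A rough cost model (my own back-of-envelope sketch, not part of the original experiments) makes that flatness plausible. With a linear scan inside each bucket, the expected number of comparisons per lookup in a tree holding N entries is roughly

    C(n) ≈ (n/2) · log_n N = (n / (2 ln n)) · ln N

and the n-dependent factor n/ln n changes by less than a factor of three as n goes from 6 to 32, a difference that the shallower tree and the better cache behaviour of larger buckets largely absorb.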

The next aspect to be considered was the information representation on disk. It was decided that Pyrrho should avoid as far as possible any locale- or platform-specific data representations. The use of octets (bytes) for disk storage looked acceptable, and Unicode (UTF-8 encoding) seemed reasonable. Most commercial database systems allow for large-precision numeric data, and in choosing a fast format for such data I arrived at a very large capacity format (up to 2040 bits) for data on disk, leaving the database engine to optimise arithmetic operations by using 64-bit integers for most data in memory. It is then up to the client-side library to apply regional formats, string collations and settings. Pyrrho adopted the normal practice of using 5-character error condition codes, some of which are specified in the SQL standard, so that the client-side software can localise their explanations.

In the initial development it was discovered that for parsing SQL, which is a huge language with hundreds of reserved words, the use of simple LL(1) parsing techniques outperformed more sophisticated compiler technology by several orders of magnitude. It is common even today for people to criticise SQL databases on grounds of speed, but the language itself should not be blamed.
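
What LL(1) means in practice is sketched below (a toy fragment with invented names, nothing like Pyrrho's real grammar): the parser keeps a single lookahead token, and each grammar rule is an ordinary method that decides what to do by inspecting just that token.

// Toy LL(1)-style recursive-descent recogniser for a tiny SQL fragment:
//     query ::= SELECT ident { , ident } FROM ident
// A single lookahead token is enough to choose every production.
final class TinySqlParser {
    private final String[] tok;   // crude tokenisation: split on whitespace and commas
    private int pos;              // tok[pos] is the single lookahead token

    TinySqlParser(String sql) { tok = sql.trim().split("[\\s,]+"); }

    private String peek() { return pos < tok.length ? tok[pos].toUpperCase() : ""; }

    void parseQuery() {           // query ::= SELECT columns FROM ident
        expect("SELECT");
        parseColumns();
        expect("FROM");
        pos++;                    // table name
    }

    private void parseColumns() { // columns ::= ident { ident }
        pos++;                    // first column name
        while (!peek().equals("FROM") && !peek().isEmpty())
            pos++;                // further column names
    }

    private void expect(String kw) {
        if (!peek().equals(kw))
            throw new IllegalArgumentException("expected " + kw + " near token " + pos);
        pos++;
    }
}

For example, new TinySqlParser("SELECT a, b FROM t").parseQuery() accepts the input, while anything malformed fails at the offending token.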

Another innovation was in the traversal of data sets, where a novel partial-evaluation approach to column values was implemented. This provides automatic optimisations for complex queries, joins and so on. Each result set has an enumerator, of course, but column values are obtained through deferred "get" computations, so that moving to the next row (for example) simply changes what those computations will deliver. There is a lot of code inside Pyrrho to streamline the computation of values in such rowsets, and to select the best indexing method, and this is one of the success stories in Pyrrho's implementation speed.
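
One plausible reading of that idea, sketched with invented names (the real rowset machinery is far more elaborate), is that advancing the enumerator does no column computation at all: a column's value is a deferred computation that runs only when some consumer asks for it, so a projection or join touches only the columns it actually uses.

import java.util.Iterator;
import java.util.Map;
import java.util.function.Function;

// Illustrative cursor over a row source whose column accessors are deferred:
// moving to the next row is cheap, and a derived column's value is computed
// only when get() is called for it.
final class LazyCursor {
    private final Iterator<Map<String, Object>> rows;                         // underlying rows
    private final Map<String, Function<Map<String, Object>, Object>> derived; // computed columns
    private Map<String, Object> current;

    LazyCursor(Iterator<Map<String, Object>> rows,
               Map<String, Function<Map<String, Object>, Object>> derived) {
        this.rows = rows;
        this.derived = derived;
    }

    boolean moveNext() {          // advancing performs no column evaluation
        if (!rows.hasNext()) return false;
        current = rows.next();
        return true;
    }

    Object get(String column) {   // evaluation happens here, on demand
        Function<Map<String, Object>, Object> f = derived.get(column);
        return f != null ? f.apply(current) : current.get(column);
    }
}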

In benchmarking the resulting database using TPC-C (TPC 2001), a speed-up of three orders of magnitude resulted from the adoption of an arbitrary fixed size for messages between client and server. At this stage, Pyrrho's performance proved to be comparable to that of the commercial databases on a PC (Crowe 2005), with much less disk activity (reduced by a factor of 80 compared with SQL Server). This reduction in disk activity was seen as significant because, over time, disk access speeds improve only slowly while memory sizes and speeds obey Moore's law.
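
The fixed-size message idea can be pictured as follows (a simplified sketch with an invented block size, not the actual wire protocol): rather than issuing one network write per value or per row, the server accumulates output into a block of constant length and transmits it only when the block is full or the result is complete.

import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;

// Simplified sketch of fixed-size client/server messages: results are packed
// into blocks of constant length, so a result set costs a small, predictable
// number of network writes instead of one write per field or row.
final class BlockedOutput {
    static final int BLOCK_SIZE = 2048;   // illustrative; the real size is a protocol choice
    private final OutputStream net;
    private final byte[] block = new byte[BLOCK_SIZE];
    private int used;

    BlockedOutput(OutputStream net) { this.net = net; }

    void write(byte[] data) throws IOException {
        int off = 0;
        while (off < data.length) {
            int n = Math.min(data.length - off, BLOCK_SIZE - used);
            System.arraycopy(data, off, block, used, n);
            used += n;
            off += n;
            if (used == BLOCK_SIZE) send();   // only full blocks cross the network mid-result
        }
    }

    void finish() throws IOException {        // pad and send the last, possibly partial, block
        Arrays.fill(block, used, BLOCK_SIZE, (byte) 0);
        send();
    }

    private void send() throws IOException {
        net.write(block, 0, BLOCK_SIZE);
        used = 0;
    }
}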

DEVELOPMENT SINCE 2005

It quickly became apparent that while a number of people in the database community found Pyrrho's approach intriguing, there was no likelihood of anyone adopting Pyrrho for commercial activities. Business people at the time did not understand or care about transaction safety, and database people were too anxious and lazy to argue about such things. So very quickly Pyrrho became an environment for exploring the cross-over to related aspects of information management.

Some of the features added are in the SQL standard, such as the full range of window functions, the SQL programming language, exception handling, triggers and user-defined types. Naturally Pyrrho went further than commercial databases in implementing these (e.g. trigger actions that distinguish old and new row versions) in a rigorous way, and I treasure some comments from database experts along the lines that they never expected to see such-and-such a feature implemented in their lifetime.

More interesting was the approach to strong typing, where the SQL standard has been criticised for making databases look too much like programming languages. My feeling was that this was a good thing, and Pyrrho has always had strong typing. Its types became a great deal stronger as a result of the RDF movement inside W3C, although eventually I found the overlapping type-conversion rules introduced by W3C for XML, RDF and SPARQL too counter-intuitive. After fully implementing an RDF triple store, and SPARQL, inside the Pyrrho database engine, Pyrrho ended up with a very rigorous type system that survived the removal of these features in favour of other developments. It still supports the notion of OWL types.

In fact the SQL standard does not allow the names of database objects to be altered, preferring the objects to be destroyed and recreated, but from Pyrrho's forensic standpoint renaming is much to be preferred, and it is supported to the extent that the renaming of an object is reflected in the code of stored procedures, triggers, default values and so on. Pyrrho is so fast at processing SQL that such code is always interpreted, never compiled.

Temporal databases have long been an interesting area, and Pyrrho implemented some intrinsic functions (NEXT, LAST, AT) and the concept of TEMPORAL JOIN to optimise processing of temporal queries. This is fast and effective, but is rather different from the VERSIONING concept introduced in the 2011 version of SQL, which of course Pyrrho has also implemented and extended. The new SQL approach is more declarative: Pyrrho's temporal tables are automatic and maybe more intuitive.

Another interesting area was the idea of authority. As mentioned above, Pyrrho's transactions have always stored the identity of the user committing the transaction along with the authority being used. The corresponding SQL concept is Role. In SQL users are granted roles, and one of these can become the session role. So Pyrrho's authority concept is exactly the same as SQL's session role. However, in recent versions of Pyrrho, changes to the names of database objects are only seen in the role that changes them.

Pyrrho's response to the Java Enterprise Edition was to implement the Java Persistence API inside Pyrrho, and this exploration was very interesting, as it highlighted the way that the data model emerged as declarations in applications rather than being intrinsic to the database schema. This aspect is also a feature of Microsoft's Entity Framework and language-integrated query (LINQ). Implementing this rigorously meant that different concurrent transactions would have different data models for the same database, which seemed dangerous and burdensome. Pyrrho at one time had full implementations of both JPA and LINQ, but eventually I removed all of these aspects in favour of a fully schema-based approach to data modelling based on roles, which I believe could be of theoretical interest. While adding such "metadata" aspects to SQL syntax, in particular distinguishing entity tables, Pyrrho also allowed metadata additions to enhance automated data visualisation.

There have been some other changes inspired by the open data movement, such as recording the provenance of data imported into databases, and marking a database as curated, so that future changes to it are prevented. Needless to say, these developments merely ensure that nobody wants to commit to using Pyrrho.

THE JOURNEY CONTINUES

In 2013 I returned to the implementation of distributed and partitioned databases and came up with a more rigorous implementation of both. Current work is considering massively-partitioned databases and the use of parallel searching similar to map-reduce algorithms.

The development of Pyrrho is likely to continue for a few years yet. The complete sources have been publicly available since 2007, and it is gratifying to see evidence of some of my code being re-used in other projects and even a few products.

It is a pleasure to acknowledge the moral support provided by colleagues at UWS, notably Thomas Connolly, Carolyn Begg, and Alistair McMonnies, and in the DBTech project, especially Martti Laiho and Fritz Laux.

References

Crowe, Malcolm (2004): Information, Knowledge and the Evolution of Computing, Proceedings of the 1st International Scientific Conference on Information Technology and Quality, Athens. University of Paisley, pp. 9-13, ISBN 1-903978-19-X.

Crowe, Malcolm K (2005): Transactions in the Pyrrho Database Engine, IASTED International Conference on Databases and Applications, part of the 23rd Multi-Conference on Applied Informatics, Innsbruck, Austria, February 14-16, 2005

SQL2011: ISO/IEC 9075:2011, Information Technology, Database Languages, SQL. Part 2: Foundation (SQL/Foundation); Part 4: Persistent Stored Modules; Part 14: XML-Related Specifications (SQL/XML). International Organization for Standardization, 2011.

TPC (2001): TPC-C, an On-Line Transaction Processing Benchmark, Transaction Processing Performance Council, San Francisco.

Malcolm Crowe

University of the West of Scotland, UK

malcolm.crowe@uws.ac.uk