La sabiduría no vale la pena si no es posible servirse de ella para inventar una nueva manera de preparar los garbanzos. (Wisdom isn't worth anything if you can't use it to come up with a new way to cook garbanzos.)
—A wise Catalan in "Cien años de soledad", by Gabriel García Márquez
The goal of PyTables is to enable the end user to easily manipulate data tables and array objects in a hierarchical structure. The foundation of the underlying hierarchical data organization is the excellent HDF5 library (see []).
It should be noted that this package is not intended to serve as a complete wrapper for the entire HDF5 API, but only to provide a flexible, very pythonic tool to deal with (arbitrarily) large amounts of data (typically bigger than available memory) in tables and arrays organized in a hierarchical and persistent disk storage structure.
A table is defined as a collection of records whose values are stored in fixed-length fields. All records have the same structure and all values in each field have the same data type. Fixed-length fields and strict data types may seem like strange requirements for an interpreted language like Python, but they serve a useful purpose when the goal is to save very large quantities of data (such as that generated by many data acquisition systems, Internet services or scientific applications) in an efficient manner that reduces demand on CPU time and I/O.
To emulate in Python the records that map to HDF5 C structs, PyTables implements a special class that makes it easy to define all their fields and other properties. PyTables also provides a powerful interface to mine data in tables. In the HDF5 naming scheme, records in tables are known as compound data types.
For example, you can define arbitrary tables in Python simply by declaring a class with named fields and type information, as in the following example:
class Particle(IsDescription):
    name      = StringCol(16)   # 16-character string
    idnumber  = Int64Col()      # Signed 64-bit integer
    ADCcount  = UInt16Col()     # Unsigned short integer
    TDCcount  = UInt8Col()      # Unsigned byte
    grid_i    = Int32Col()      # Integer
    grid_j    = IntCol()        # Integer (equivalent to Int32Col)

    class Properties(IsDescription):  # A sub-structure (nested data-type)
        pressure = Float32Col(shape=(2, 3))   # 2-D float array (single-precision)
        energy   = FloatCol(shape=(2, 3, 4))  # 3-D float array (double-precision)
You then pass this class to the table constructor, fill its rows with your values, and save (arbitrarily large) collections of them to a file for persistent storage. After that, the data can be retrieved and post-processed quite easily with PyTables, or even with another HDF5 application (in C, Fortran, Java or any other language that provides a library to interface with HDF5).
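The following is a minimal sketch of that workflow, assuming the modern PyTables API (tables >= 3.0; earlier releases spelled these names in camelCase, e.g. openFile, createTable) and a reduced version of the Particle description above; the file name and sample values are illustrative.

    import tables
    from tables import IsDescription, StringCol, Int64Col, UInt16Col

    class Particle(IsDescription):
        name = StringCol(16)      # 16-character string
        idnumber = Int64Col()     # signed 64-bit integer
        ADCcount = UInt16Col()    # unsigned short integer

    with tables.open_file("particles.h5", mode="w") as h5file:
        table = h5file.create_table("/", "readout", Particle, "Detector readout")
        row = table.row
        for i in range(10):                    # hypothetical sample data
            row["name"] = "Particle: %6d" % i
            row["idnumber"] = i
            row["ADCcount"] = i * 2
            row.append()                       # queue the record for writing
        table.flush()                          # write buffered records to disk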
Other important entities in PyTables are the array objects, which are analogous to tables except that all of their components are homogeneous. They come in different flavors: generic (a quick and fast way to deal with numerical arrays), enlargeable (arrays that can be extended along any single dimension) and variable length (each row in the array can have a different number of elements).
The next section describes the most interesting capabilities of PyTables.
PyTables takes advantage of the object orientation and introspection capabilities offered by Python, the powerful data management features of HDF5, and the flexibility and high performance of numarray for manipulating large sets of objects organized in a grid-like fashion, to provide these features:
Support for table entities: You can tailor your data by adding or deleting records in your tables (as in the example above). Very large numbers of rows (up to 2**62, i.e. far more than will fit into memory) are supported as well.
Multidimensional and nested table cells: You can declare a column to consist of general array cells as well as scalars, the latter being the only dimensionality allowed by the majority of relational databases. You can even declare columns that are made of other columns (of different types), which are known as struct (nested) types.
Indexing support for columns of tables: Very useful if you have large tables and you want to quickly look up values in columns satisfying some criteria (see the indexing sketch after this list).
Support for numerical arrays: NumPy (see []), Numeric (see []) and numarray (see []) arrays can be used as a useful complement to tables for storing homogeneous data (see the array-storing sketch after this list).
Enlargeable arrays: You can add new elements to existing arrays on disk, along any single dimension you choose. In addition, you can access just a slice of your datasets through the powerful extended slicing mechanism, without needing to load the complete dataset into memory (see the enlargeable array sketch after this list).
Variable length arrays: The number of elements in these arrays can vary from row to row. This provides a lot of flexibility when dealing with complex data (see the variable length array sketch after this list).
Support for a hierarchical data model: Allows the user to clearly structure all the data. PyTables builds up an object tree in memory that replicates the underlying file data structure. Access to the file objects is achieved by walking through and manipulating this object tree (see the tree-walking sketch after this list).
User defined metadata: Besides supporting system metadata (like the number of rows of a table, shape, flavor, etc.), the user may specify arbitrary metadata (for example, the room temperature, or the protocol of the IP traffic that was collected) that complements the meaning of the actual data (see the metadata sketch after this list).
Ability to read/modify generic HDF5 files: PyTables can access a wide range of objects in generic HDF5 files, like compound type datasets (which can be mapped to Table objects), homogeneous datasets (which can be mapped to Array objects) or variable length record datasets (which can be mapped to VLArray objects). Moreover, if a dataset is not supported, it will be mapped to a special UnImplemented class (see 4.14), which lets the user see that the data is there, although it will be unreachable (still, you will be able to access the attributes and some metadata of the dataset). With that, PyTables can probably access and modify most of the HDF5 files out there.
Data compression: Supports data compression (using the Zlib, LZO and bzip2 compression libraries) out of the box. This is important when you have repetitive data patterns and don't want to spend time searching for an optimized way to store them, saving you the time you would otherwise spend analyzing your data organization (see the compression sketch after this list).
High performance I/O: On modern systems storing large amounts of data, tables and array objects can be read and written at a speed only limited by the performance of the underlying I/O subsystem. Moreover, if your data is compressible, even that limit is surmountable!
Support of files bigger than 2 GB: PyTables automatically inherits this capability from the underlying HDF5 library (assuming your platform supports the C long long integer, or, on Windows, __int64).
Architecture-independent: PyTables has been carefully coded (as has HDF5 itself) with little-endian/big-endian byte-ordering issues in mind. So, you can write a file on a big-endian machine (like a Sparc or MIPS) and read it on another little-endian machine (like an Intel or Alpha) without problems. In addition, it has been tested successfully with 64 bit platforms (Intel-64, AMD-64, PowerPC-G5, MIPS, UltraSparc) using code generated with 64 bit aware compilers.
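Here is the indexing sketch referred to above: a minimal example of creating a column index and running a query, assuming the hypothetical particles.h5 file created earlier and a PyTables release where column indexing is available; the column chosen and the query condition are illustrative.

    import tables

    with tables.open_file("particles.h5", mode="a") as h5file:
        table = h5file.root.readout
        table.cols.ADCcount.create_index()  # build an index on one column
        # The condition is evaluated on disk, without loading whole columns:
        matches = [r["idnumber"] for r in table.where("ADCcount > 10")]
        print(matches)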
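A minimal array-storing sketch, under the same assumptions about the modern API; the file and node names are illustrative.

    import numpy as np
    import tables

    with tables.open_file("arrays.h5", mode="w") as h5file:
        # The NumPy array is written as a homogeneous Array node:
        h5file.create_array(h5file.root, "matrix", np.arange(12).reshape(3, 4))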
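A minimal enlargeable array sketch; the atom, shapes and node name are illustrative assumptions. The zero-length first dimension is the one that can grow.

    import numpy as np
    import tables

    with tables.open_file("earray.h5", mode="w") as h5file:
        earr = h5file.create_earray(h5file.root, "timeseries",
                                    atom=tables.Float64Atom(), shape=(0, 4))
        earr.append(np.zeros((100, 4)))  # extend along the enlargeable axis
        earr.append(np.ones((50, 4)))
        # Extended slicing reads just this slice from disk, not the whole array:
        chunk = earr[10:20, ::2]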
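A minimal variable length array sketch; the node name and values are illustrative. Each appended row may have a different number of elements.

    import tables

    with tables.open_file("ragged.h5", mode="w") as h5file:
        vla = h5file.create_vlarray(h5file.root, "measurements",
                                    tables.Int32Atom())
        vla.append([1, 2, 3])         # row 0: three elements
        vla.append([4])               # row 1: one element
        vla.append([5, 6, 7, 8, 9])   # row 2: five elements
        print(vla[2])                 # -> [5 6 7 8 9]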
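A minimal tree-walking sketch, reusing the hypothetical particles.h5 file from above.

    import tables

    with tables.open_file("particles.h5", mode="r") as h5file:
        for node in h5file.walk_nodes("/"):  # visits groups and leaves alike
            print(node._v_pathname)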
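A minimal metadata sketch; the attribute names and values are illustrative assumptions.

    import tables

    with tables.open_file("particles.h5", mode="a") as h5file:
        table = h5file.root.readout
        table.attrs.room_temperature = 23.5   # user attribute (illustrative)
        table.attrs.protocol = "TCP"          # another user attribute
        print(table.nrows)                    # system metadata: number of rows
        print(table.attrs.room_temperature)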
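A minimal compression sketch: a Filters instance selects the compression library and level. The level, shape and node name here are illustrative; Zlib ships with HDF5, while LZO and bzip2 must be available at runtime.

    import numpy as np
    import tables

    filters = tables.Filters(complevel=5, complib="zlib")
    with tables.open_file("compressed.h5", mode="w") as h5file:
        carr = h5file.create_carray(h5file.root, "data",
                                    atom=tables.Int32Atom(),
                                    shape=(1000, 1000), filters=filters)
        # Data is compressed transparently as it is written:
        carr[0, :] = np.arange(1000, dtype=np.int32)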