Rethink Big and Europe's Position in Big Data

I will take a break here from core database topics and talk a bit about EU policies for research funding.

I had lunch with Stefan Manegold of CWI last week, where we talked about where European research should go. Stefan is involved in RETHINK big, a European research project for compiling policy advice regarding big data for EC funding agencies. As part of this, he is interviewing various stakeholders such as end user organizations and developers of technology.

RETHINK big wants to come up with a research agenda primarily for hardware, anything from faster networks to greener data centers. CWI represents software expertise in the consortium.

So, we went through a regular questionnaire about how we see the landscape. I will summarize this below, as it is informative in any case.

Core competence

My own core competence is in core database functionality, specifically in high performance query processing, scale-out, and managing schema-less data. Most of the Virtuoso installed base is in the RDF space, but most potential applications are in fact outside of this niche.

User challenges

The life sciences vertical is the one in which I have the most application insight, from going to Open PHACTS meetings and holding extensive conversations with domain specialists. We have users in many other verticals, from manufacturing to financial services, but there I do not have as much exposure to the actual applications.

Having said this, the challenges throughout tend to be in diversity of data. Every researcher has their MySQL database or spreadsheet, and there may not even be a top level catalogue of everything. Data formats are diverse. Some people use linked data (most commonly RDF) as a top level metadata format. The application data, such as gene sequences or microarray assays, reside in their native file formats and there is little point in RDF-izing these.

There are also public data resources that are published in RDF serializations as a vendor-neutral, self-describing format. Having everything as triples, without a priori schema, makes things easier to integrate and in some cases easier to describe and query.

So, the challenge is in the labor-intensive nature of data integration. Data comes with different levels of quantity and quality, from hand-curated to NLP extractions. Querying in the single- or double-digit terabyte range with RDF is quite possible, as we have shown many times on this blog, but most use cases do not even go that far. Anyway, what we see in the field is primarily a data diversity game. The scenario is data integration; the technology we provide is the database. The data transformation proper, data cleansing, units of measure, entity de-duplication, and other core data-integration functions are performed using diverse, user-specific means.

Jerven Bolleman of the Swiss Institute of Bioinformatics is a user of ours with whom we have long standing discussions on the virtues of federated data and querying. I advised Stefan to go talk to him; he has fresh views about the volume challenges with unexpected usage patterns. Designing for performance is tough if the usage pattern is out of the blue, like correlating air humidity on the day of measurement with the presence of some genomic patterns. Building a warehouse just for that might not be the preferred choice, so the problem field is not exhausted. Generally, I’d go for warehousing though.

What technology would you like to have? Network or power efficiency?

OK. Even a fast network is a network. A set of processes on a single shared-memory box is also a kind of network. InfiniBand has maybe half the throughput and 3x the latency of single-threaded interprocess communication within one box. The operative word is latency. Building large systems always involves a network, or, in large scale-up scenarios, something very much like one.

On the software side, next to nobody understands latency and contention; yet these are the core factors in any pursuit of scalability. Because of this, paradigms like MapReduce and bulk synchronous parallel (BSP) processing have become popular: they take the communication out of the program flow, so the programmer cannot muck it up, as would otherwise happen with the inevitability of destiny. Of course, our beloved SQL, or declarative query in general, does give scalability in many tasks without programmer participation. Datalog has also been used as a means of shipping computation around, as in the work of Hellerstein.

There are no easy solutions. We have built scale-out conscious, vectorized extensions to SQL procedures where one can express complex parallel, distributed flows, but people do not use or understand these. These are very useful, even indispensable, but only on the inside, not as a programmer-facing construct. MapReduce and BSP are the limit of what a development culture will absorb. MapReduce and BSP do not hide the fact of distributed processing. What about things that do? Parallel, partitioned extensions to Fortran arrays? Functional languages? I think that all the obvious aids to parallel/distributed programming have been conceived of. No silver bullet; just hard work. And above all the discernment of what paradigm fits what problem. Since these are always changing, there is no finite set of rules, and no substitute for understanding and insight, and the latter are vanishingly scarce. "Paradigmatism," i.e., the belief that one particular programming model is a panacea outside of its original niche, is a common source of complexity and inefficiency. This is a common form of enthusiastic naïveté.

If you look at power efficiency, the clusters that are the easiest to program consist of relatively few high power machines and a fast network. A typical node size is 16+ cores and 256G or more RAM. Amazon has these in entirely workable configurations, as documented earlier on this blog. The leading edge in power efficiency is in larger number of smaller units, which makes life again harder. This exacerbates latency and forces one to partition the data more often, whereas one can play with replication of key parts of data more freely if the node size is larger.

One very specific item where research might help without having to rebuild the hardware stack would be better, lower-latency exposure of networks to software. Lightweight threads and user-space access, bypassing slow protocol stacks, etc. MPI has some of this, but maybe more could be done.

So, I will take a cluster of such 16-core, 256GB machines on a faster network, over a cluster of 1024 x 4G mobile phones connected via USB. Very selfish and unecological, but one has to stay alive and life is tough enough as is.

Are there pressures to adapt business models based on big data?

The transition from capex to opex may be approaching maturity, as there have been workable cloud configurations for the past couple of years. The EC2 from way back, with at best a 4 core 16G VM and a horrible network for $2/hr, is long gone. It remains the case that 4 months of 24x7 rent in the cloud equals the purchase price of physical hardware. So, for this to be economical long-term at scale, the average utilization should be about 10% of the peak, and peaks should not be on for more than 10% of the time.
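As a back-of-the-envelope version of that break-even (the four-month figure is from above; the three-year hardware life is my own assumption):

\[
\text{buy} \;\approx\; 4 \times 730\,\mathrm{h} \times r \;\approx\; 2920\,r,
\qquad
\text{rent} \;=\; u \times 26{,}280\,\mathrm{h} \times r
\]

where r is the hourly rent and u is the fraction of time the capacity is actually on over a three-year life (about 26,280 hours). Renting stays cheaper than owning while u ≲ 2920 / 26,280 ≈ 11%, which is consistent with the roughly-10% figures above.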

So, database software should be rented by the hour. A 100-150% markup on the $2.80 per hour a large EC2 instance costs would be reasonable. Consider that 70% of the cost in TPC benchmarks is database software.

There will be different pricing models combining different up-front and per-usage costs, just as there are for clouds now. If the platform business goes that way and the market accepts this, then systems software will follow. Price/performance quotes should probably be expressed as speed/price/hour instead of speed/price.

The above is rather uncontroversial but there is no harm restating these facts. Reinforce often.

Well, the question is raised: what should Europe do that would have tangible impact in the next 5 years?

This is a harder question. There is some European business in wide area and mobile infrastructures. Competing against Huawei will keep them busy. Intel and Mellanox will continue making faster networks regardless of European policies. Intel will continue building denser compute nodes, e.g., integrated Knights Corner with dual IB network and 16G fast RAM on chip. Clouds will continue making these available on demand once the technology is in mass production.

What’s the next big innovation? Neuromorphic computing? Quantum computing? Maybe. For now, I’d just do more engineering along the core competence discussed above, with emphasis on good marketing and scalable execution. By this I mean trained people who know something about deployment. There is a huge training gap. In the would-be "Age of Data," knowledge of how things actually work and scale is near-absent. I have offered to do some courses on this to partners and public alike, but I need somebody to drive this show; I have other things to do.

I have been to many, many project review meetings, mostly as a project partner but also as reviewer. For the past year, the EC has used an innovation questionnaire at the end of the meetings. It is quite vague, and I don’t think it delivers much actionable intelligence.

What would deliver this would be a venture capital type activity, with well-developed networks and active participation in developing a business. The EC is not now set up to perform this role, though. But the EC is a fairly large and wealthy entity, so it could invest some money via this type of channel. Also there should be higher individual incentives and rewards for speed and excellence. Getting the next Horizon 2020 research grant may be good, but better exists. The grants are competitive enough and the calls are not bad; they follow the times.

In the projects I have seen, productization does get some attention, e.g., the LOD2 stack, but it is not something that is really ongoing or with dedicated commercial backing. It may also be that there is no market to justify such dedicated backing. Much of the RDF work has been "me, too" — let’s do what the real database and data integration people do, but let’s just do this with triples. Innovation? Well, I took the best of the real DB world and adapted this to RDF, which did produce a competent piece of work with broad applicability, extending outside RDF. Is there better than this? Well, some of the data integration work (e.g., LIMES) is not bad, and it might be picked up by some of the players that do this sort of thing in the broader world, e.g., Informatica, the DI suites of big DB vendors, Tamr, etc. I would not know if this in fact adds value to the non-RDF equivalents; I do not know the field well enough, but there could be a possibility.

The recent emphasis on benchmarking, spearheaded by Stefano Bertolo, is good, as exemplified by the LDBC FP7. There should probably be one or two projects of this sort going at all times. These make challenges known and are an effective means of guiding research, with a large multiplier: once a benchmark gets adopted, far more work goes into solving the problem than went into stating it in the first place.

The aims and calls are good. The execution by projects is variable. For 1% of excellence, there apparently must be 99% of so-and-so, but this is just a fact of life and not specific to this context. The projects are rather diffuse. There is not a single outcome that gets all the effort. In this, the level of engagement of participants is less and focus is much more scattered than in startups. A really hungry, go-getter mood is mostly absent. I am a believer in core competence. Well, most people will agree that core competence is nice. But the projects I have seen do not drive for it hard enough.

It is hard to say exactly what kinds of incentives could be offered to encourage truly exceptional work. The American startup scene does offer high rewards and something of this could be transplanted into the EC project world. I would not know exactly what form this could take, though.

06/29/2015 15:36 GMT-0500
Virtuoso updated to version 7.2.1

We're pleased to announce that Virtuoso 7.2.1 is now available, and includes various enhancements and bug fixes. Important additions include new support for xsd:boolean and TIMEZONE-less DATETIME & xsd:dateTime; and significantly improved compatibility with the Jena and Sesame Frameworks.

New product features as of June 24, 2015, v7.2.1, include:

  • Virtuoso Engine

    • Added support for TIMEZONE-less xsd:dateTime and DATETIME
    • Added support for xsd:boolean
    • Added new text index functions
    • Added better handling of HTTP status codes on SPARQL graph protocol endpoint
    • Added new cache for compiled regular expressions
    • Added support for expression in TOP/SKIP
  • SPARQL

    • Added support for SPARQL GROUPING SETS
    • Added support for SPARQL 1.1 EBV (Effective Boolean Value)
    • Added support for define input:with-fallback-graph_uri
    • Added support for define input:target-fallback-graph-uri
  • Jena & Sesame Compatibility

    • Added support for using rdf_insert_triple_c() to insert BNode data
    • Added support for returning xsd:boolean as true/false rather than 1/0
    • Added support for maxQueryTimeout in Sesame2 provider
  • JDBC Driver

    • Added new methods setLogFileName and getLogFileName
    • Added new attribute "logFileName" to VirtuosoDataSources for logging support
  • Faceted Browser

    • Added support for emitting HTML5+Microdata instead of RDFa as default HTML page
    • Added query optimizations
    • Added new footer icons to /describe page
  • Conductor and DAV

    • Added support for VAD dependency tree
    • Added support for default vdirs when creating new listeners
    • Added support for private RDF graphs
    • Added support for LDP in DAV API
    • Added option to create shared folder if not present
    • Added option to enable/disable DET graphs binding
    • Added option to set content length threshold for asynchronous sponging
    • Added folder option related to .TTL redirection
    • Added functions to edit turtle files
    • Added popup dialog to search for unknown prefixes
    • Added registry option to add missing prefixes for .TTL files
More details of the additions, fixes, and other changes in this update of both Open Source and Commercial Editions may be found on the Virtuoso News page.
06/29/2015 15:21 GMT-0500
Virtuoso Elastic Cluster Benchmarks AMI on Amazon EC2

We have another new Amazon machine image, this time for deploying your own Virtuoso Elastic Cluster on the cloud. The previous post gave a summary of running TPC-H on this image. This post is about what the AMI consists of and how to set it up.

Note: This AMI is running a pre-release build of Virtuoso 7.5, Commercial Edition. Features are subject to change, and this build is not licensed for any use other than the AMI-based benchmarking described herein.

There are two preconfigured cluster setups; one is for two (2) machines/instances and one is for four (4). Generation and loading of TPC-H data, as well as the benchmark run itself, is preconfigured, so you can do it by entering just a few commands. The whole sequence of doing a terabyte (1000G) scale TPC-H takes under two hours, with 30 minutes to generate the data, 35 minutes to load, and 35 minutes to do three benchmark runs. The 100G scale is several times faster still.

To experiment with this AMI, you will need a set of license files, one per machine/instance, which our Sales Team can provide.

Detailed instructions are on the AMI, in /home/ec2-user/cluster_instructions.txt, but the basic steps to get up and running are as follows:

  1. Instantiate machine image ami-811becea (AMI ID is subject to change; you should be able to find the latest by searching for "OpenLink Virtuoso Benchmarks" in "Community AMIs"; this one is short-named virtuoso-bench-cl) with two or four (2 or 4) R3.8xlarge instances within one virtual private cloud (VPC) and placement group. Make sure the VPC security is set to allow all connections.

  2. Log in to the first instance, and fill in the configuration file with the internal IP addresses of all machines instantiated in step 1.

  3. Distribute the license files to the instances, and start the OpenLink License Manager on each machine.

  4. Run 3 shell commands to set up the file systems and the Virtuoso configuration files.

  5. If you do not plan to run one of these benchmarks, you can simply start and work with the Virtuoso cluster now. It is ready for use with an empty database.

  6. Before running one of these benchmarks, generate the appropriate dataset with the dbgen.sh command.

  7. Bulk load the data with load.sh.

  8. Run the benchmark with run.sh.

Right now the cluster benchmarks are limited to TPC-H, but cluster versions of the LDBC Social Network and Semantic Publishing benchmarks will follow soon.

06/16/2015 17:53 GMT-0500 Modified: 06/17/2015 10:13 GMT-0500
In Hoc Signo Vinces (part 21 of n): Running TPC-H on Virtuoso Elastic Cluster on Amazon EC2

We have made an Amazon EC2 deployment of Virtuoso 7 Commercial Edition, configured to use the Elastic Cluster Module with TPC-H preconfigured, similar to the recently published OpenLink Virtuoso Benchmark AMI running the Open Source Edition. The details of the new Elastic Cluster AMI and steps to use it will be published in a forthcoming post. Here we will simply look at results of running TPC-H 100G scale on two machines, and 1000G scale on four machines. This shows how Virtuoso provides great performance on a cloud platform. The extremely fast bulk load — 33 minutes for a terabyte! — means that you can get straight to work even with on-demand infrastructure.

In the following, the Amazon instance type is R3.8xlarge, each with dual Xeon E5-2670 v2, 244G RAM, and 2 x 300G SSD. The image is made from the Amazon Linux with built-in network optimization. We first tried a RedHat image without network optimization and had considerable trouble with the interconnect. Using network-optimized Amazon Linux images inside a virtual private cloud has resolved all these problems.

The network-optimized 10GE interconnect at Amazon offers throughput close to that of QDR InfiniBand running TCP/IP; thus the Amazon platform is suitable for running cluster databases. The execution that we have seen is not seriously network bound.

100G on 2 machines, with a total of 32 cores, 64 threads, 488 GB RAM, 4 x 300 GB SSD

Load time: 3m 52s
Run | Power     | Throughput | Composite
1   | 523,554.3 | 590,692.6  | 556,111.2
2   | 565,353.3 | 642,503.0  | 602,694.9

1000G on 4 machines, with a total of 64 cores, 128 threads, 976 GB RAM, 8 x 300 GB SSD

Load time: 32m 47s
Run | Power     | Throughput | Composite
1   | 592,013.9 | 754,107.6  | 668,163.3
2   | 896,564.1 | 828,265.4  | 861,738.4
3   | 883,736.9 | 829,609.0  | 856,245.3

For the larger scale we did 3 sets of power + throughput tests to measure consistency of performance. By the TPC-H rules, the worst (first) score should be reported. Even after bulk load, this is markedly less than the next power score due to working set effects. This is seen to a lesser degree with the first throughput score also.

The numerical summaries are available in a report.zip file, or as individual files.

Subsequent posts will explain how to deploy Virtuoso Elastic Clusters on AWS.

In Hoc Signo Vinces (TPC-H) Series

06/10/2015 12:04 GMT-0500 Modified: 06/10/2015 12:49 GMT-0500
Introducing the OpenLink Virtuoso Benchmarks AMI on Amazon EC2

The OpenLink Virtuoso Benchmarks AMI is an Amazon EC2 machine image with the latest Virtuoso open source technology preconfigured to run —

  • TPC-H, the classic of SQL data warehousing

  • LDBC SNB, the new Social Network Benchmark from the Linked Data Benchmark Council

  • LDBC SPB, the RDF/SPARQL Semantic Publishing Benchmark from LDBC

This package is ideal for technology evaluators and developers interested in getting the most performance out of Virtuoso. This is also an all-in-one solution to any questions about reproducing claimed benchmark results. All necessary tools for building and running are included; thus any developer can use this model installation as a starting point. The benchmark drivers are preconfigured with appropriate settings, and benchmark qualification tests can be run with a single command.

The Benchmarks AMI includes a precompiled, preconfigured checkout of the v7fasttrack github repository, checkouts of the github repositories of the benchmarks, and a number of running directories with all configuration files preset and optimized. The image is intended to be instantiated on a R3.8xlarge Amazon instance with 244G RAM, dual Xeon E5-2670 v2, and 600G SSD.

Benchmark datasets and preloaded database files can be downloaded from S3 when large, and generated as needed on the instance when small. As an alternative, the instance is also set up to do all phases of data generation and database bulk load.

The following benchmark setups are included:

  • TPC-H 100G
  • TPC-H 300G
  • LDBC SNB Validation
  • LDBC SNB Interactive 100G
  • LDBC SNB Interactive 300G (SF3)
  • LDBC SPB Validation
  • LDBC SPB Basic 256 Mtriples (SF5)
  • LDBC SPB Basic 1 Gtriple

The AMI will be expanded as new benchmarks are introduced, for example, the LDBC Social Network Business Intelligence or Graph Analytics.

To get started:

  1. Instantiate machine image ami-eb789280 (AMI ID is subject to change; you should be able to find the latest by searching for "OpenLink Virtuoso Benchmarks" in "Community AMIs"; this one is short-named virtuoso-bench-6) with a R3.8xlarge instance.

  2. Connect via ssh.

  3. See the README (also found in the ec2-user's home directory) for full instructions on getting up and running.

06/09/2015 11:51 GMT-0500 Modified: 06/18/2015 14:56 GMT-0500
SNB Interactive, Part 3: Choke Points and Initial Run on Virtuoso

In this post we will look at running the LDBC SNB on Virtuoso.

First, let's recap what the benchmark is about:

  1. fairly frequent short updates, with no update contention worth mentioning
  2. short random lookups
  3. medium complex queries centered around a person's social environment

The updates exist so as to invalidate strategies that rely too heavily on precomputation. The short lookups exist for the sake of realism; after all, an online social application does lookups for the most part. The medium complex queries are to challenge the DBMS.

The DBMS challenges have to do firstly with query optimization, and secondly with execution with a lot of non-local random access patterns. Query optimization is not a requirement, per se, since imperative implementations are allowed, but we will see that these are no more free of the laws of nature than the declarative ones.

The workload is arbitrarily parallel, so intra-query parallelization is not particularly useful, though not harmful either. There are latency constraints on operations which strongly encourage implementations to stay within a predictable time envelope regardless of specific query parameters. The parameters are a combination of person and date range, and sometimes tags or countries. The hardest queries have the potential to access all content created by people within 2 steps of a central person, so possibly thousands of people, times 2000 posts per person, times up to 4 tags per post. We are talking about millions of key lookups, aiming for sub-second single-threaded execution.
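To make the access pattern concrete, here is a minimal SQL sketch of such a two-hop expansion. The table and column names (knows, post, k_person1id, ps_creatorid, and so on) are illustrative only, not the actual benchmark implementation:

    -- Posts by everyone within two knows-steps of a central person, newest
    -- first; friends reachable by more than one path are not de-duplicated
    -- here, and the date/tag filters of the real queries are reduced to a
    -- single cutoff parameter.
    SELECT TOP 20 p.ps_postid, p.ps_creationdate
      FROM knows k1
      JOIN knows k2 ON k2.k_person1id = k1.k_person2id
      JOIN post  p  ON p.ps_creatorid = k2.k_person2id
     WHERE k1.k_person1id = ?          -- the central person
       AND p.ps_creationdate <= ?      -- date-range parameter
     ORDER BY p.ps_creationdate DESC;

Even in this reduced form, the joins fan out to thousands of persons and millions of posts before the top-k cutoff applies.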

The test system is the same as used in the TPC-H series: dual Xeon E5-2630, 2x6 cores x 2 threads, 2.3GHz, 192 GB RAM. The software is the feature/analytics branch of v7fasttrack, available from www.github.com.

The dataset is the SNB 300G set, with:

1,136,127 persons
125,249,604 knows edges
847,886,644 posts, including replies
1,145,893,841 tags of posts or replies
1,140,226,235 likes of posts or replies

As an initial step, we run the benchmark as fast as it will go. We use 32 threads on the driver side for 24 hardware threads.

Below are the numerical quantities for a 400K operation run after 150K operations worth of warmup.

Duration: 10:41.251
Throughput: 623.71 (op/s)

The statistics that matter are detailed below, with operations ranked in order of descending client-side wait-time. All times are in milliseconds.

% of total | total_wait | name                          | count  | mean     | min   | max
20%        | 4,231,130  | LdbcQuery5                    | 656    | 6,449.89 | 245   | 10,311
11%        | 2,272,954  | LdbcQuery8                    | 18,354 | 123.84   | 14    | 2,240
10%        | 2,200,718  | LdbcQuery3                    | 388    | 5,671.95 | 468   | 17,368
7.3%       | 1,561,382  | LdbcQuery14                   | 1,124  | 1,389.13 | 4     | 5,724
6.7%       | 1,441,575  | LdbcQuery12                   | 1,252  | 1,151.42 | 15    | 3,273
6.5%       | 1,396,932  | LdbcQuery10                   | 1,252  | 1,115.76 | 13    | 4,743
5%         | 1,064,457  | LdbcShortQuery3PersonFriends  | 46,285 | 22.9979  | 0     | 2,287
4.9%       | 1,047,536  | LdbcShortQuery2PersonPosts    | 46,285 | 22.6323  | 0     | 2,156
4.1%       | 885,102    | LdbcQuery6                    | 1,721  | 514.295  | 8     | 5,227
3.3%       | 707,901    | LdbcQuery1                    | 2,117  | 334.389  | 28    | 3,467
2.4%       | 521,738    | LdbcQuery4                    | 1,530  | 341.005  | 49    | 2,774
2.1%       | 440,197    | LdbcShortQuery4MessageContent | 46,302 | 9.50708  | 0     | 2,015
1.9%       | 407,450    | LdbcUpdate5AddForumMembership | 14,338 | 28.4175  | 0     | 2,008
1.9%       | 405,243    | LdbcShortQuery7MessageReplies | 46,302 | 8.75217  | 0     | 2,112
1.9%       | 404,002    | LdbcShortQuery6MessageForum   | 46,302 | 8.72537  | 0     | 1,968
1.8%       | 387,044    | LdbcUpdate3AddCommentLike     | 12,659 | 30.5746  | 0     | 2,060
1.7%       | 361,290    | LdbcShortQuery1PersonProfile  | 46,285 | 7.80577  | 0     | 2,015
1.6%       | 334,409    | LdbcShortQuery5MessageCreator | 46,302 | 7.22234  | 0     | 2,055
1%         | 220,740    | LdbcQuery2                    | 1,488  | 148.347  | 2     | 2,504
0.96%      | 205,910    | LdbcQuery7                    | 1,721  | 119.646  | 11    | 2,295
0.93%      | 198,971    | LdbcUpdate2AddPostLike        | 5,974  | 33.3062  | 0     | 1,987
0.88%      | 189,871    | LdbcQuery11                   | 2,294  | 82.7685  | 4     | 2,219
0.85%      | 182,964    | LdbcQuery13                   | 2,898  | 63.1346  | 1     | 2,201
0.74%      | 158,188    | LdbcQuery9                    | 78     | 2,028.05 | 1,108 | 4,183
0.67%      | 143,457    | LdbcUpdate7AddComment         | 3,986  | 35.9902  | 1     | 1,912
0.26%      | 54,947     | LdbcUpdate8AddFriendship      | 571    | 96.2294  | 1     | 988
0.2%       | 43,451     | LdbcUpdate6AddPost            | 1,386  | 31.3499  | 1     | 2,060
0.0086%    | 1,848      | LdbcUpdate4AddForum           | 103    | 17.9417  | 1     | 65
0.0002%    | 44         | LdbcUpdate1AddPerson          | 2      | 22       | 10    | 34

At this point we have in-depth knowledge of the choke points the benchmark stresses, and we can give a first assessment of whether the design meets its objectives for setting an agenda for the coming years of graph database development.

The implementation is well optimized in general but still has maybe 30% room for improvement. We note that this is based on a compressed column store. One could think that alternative data representations, like in-memory graphs of structs and pointers between them, are better for the task. This is not necessarily so; at the least, a compressed column store is much more space efficient. Space efficiency is the root of cost efficiency, since as soon as the working set is not in memory, a random access workload is badly hit.

The set of choke points (technical challenges) actually revealed by the benchmark is so far as follows:

  • Cardinality estimation under heavy data skew — Many queries take a tag or a country as a parameter. The cardinalities associated with tags vary from 29M posts for the most common to 1 for the least common. Q6 has a common tag (in the top few hundred) half the time and a random, most often very infrequent, one the rest of the time. A declarative implementation must recognize the cardinality implications from the literal and plan accordingly. An imperative one would have to count. Missing this makes Q6 take about 40% of total time instead of the 4.1% it takes when the plan adapts.

  • Covering indices — Being able to make multi-column indices that duplicate some columns from the table often saves an entire table lookup. For example, an index on post by author can also contain the post's creation date.

  • Multi-hop graph traversal — Most queries access a two-hop environment starting at a person. Two queries look for shortest paths of unbounded length. For the two-hop case, it makes almost no difference whether this is done as a union or a special graph traversal operator. For shortest paths, this simply must be built into the engine; doing this client-side incurs prohibitive overheads. A bidirectional shortest path operation is a requirement for the benchmark.

  • Top K — Most queries returning posts order results by descending date. Once there are at least k results, anything older than the kth can be dropped, adding a date selection as early as possible in the query. This interacts with vectored execution, so that starting with a short vector size more rapidly produces an initial top k. (See the sketch after this list.)

  • Late projection — Many queries access several columns and touch millions of rows but only return a few. The columns that are not used in sorting or selection can be retrieved only for the rows that are actually returned. This is especially useful with a column store, as this removes many large columns (e.g., text of a post) from the working set.

  • Materialization — Q14 accesses an expensive-to-compute edge weight, the number of post-reply pairs between two people. Keeping this precomputed drops Q14 from the top place. Other materialization would be possible, for example Q2 (top 20 posts by friends), but since Q2 is just 1% of the load, there is no need. One could of course argue that this should be 20x more frequent, in which case there could be a point to this.

  • Concurrency control — Read-write contention is rare, as updates are randomly spread over the database. However, some pages get read very frequently, e.g., some middle level index pages in the post table. Keeping a count of reading threads requires a mutex, and there is significant contention on this. Since the hot set can be one page, adding more mutexes does not always help. However, hash partitioning the index into many independent trees (as in the case of a cluster) helps for this. There is also contention on a mutex for assigning threads to client requests, as there are large numbers of short operations.
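As a sketch of how the top-k cutoff and late projection work together (illustrative schema and names, not the production code; in practice the engine applies the late projection internally rather than requiring the query to be written this way):

    -- Top-k: once 20 rows are in hand, anything with a creationdate older
    -- than the current 20th can be skipped while scanning the index.
    SELECT TOP 20 ps_postid, ps_creationdate
      FROM post
     WHERE ps_creatorid = ?
     ORDER BY ps_creationdate DESC;

    -- Late projection: wide columns such as the post text are fetched only
    -- for the 20 surviving rows, by joining back on the id.
    SELECT p.ps_postid, p.ps_content
      FROM (SELECT TOP 20 ps_postid
              FROM post
             WHERE ps_creatorid = ?
             ORDER BY ps_creationdate DESC) best
      JOIN post p ON p.ps_postid = best.ps_postid;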

In subsequent posts, we will look at specific queries, what they in fact do, and what their theoretical performance limits would be. In this way we will have a precise understanding of which way SNB can steer the graph DB community.

SNB Interactive Series

06/09/2015 11:35 GMT-0500
The Virtuoso Science Library

There is a lot of scientific material on Virtuoso, but it has not been presented all together in any one place. So I am making here a compilation of the best resources with a paragraph of introduction on each. Some of these are project deliverables from projects under the EU FP7 programme; some are peer-reviewed publications.

For the future, an updated version of this list may be found on the main Virtuoso site.

European Project Deliverables

  • GeoKnow D2.6.1: Graph Analytics in the DBMS (2015-01-05)

    This introduces the idea of unbundling basic cluster DBMS functionality like cross partition joins and partitioned group by to form a graph processing framework collocated with the data.

  • GeoKnow D2.4.1: Geospatial Clustering and Characteristic Sets (2015-01-06)

    This presents experimental results of structure-aware RDF applied to geospatial data. The regularly structured part of the data goes in tables; the rest is triples/quads. Furthermore, for the first time in the RDF space, physical storage location is correlated to properties of entities, in this case geo location, so that geospatially adjacent items are also likely adjacent in the physical data representation.

  • LOD2 D2.1.5: 500 billion triple BSBM (2014-08-18)

    This presents experimental results on lookup and BI workloads on a Virtuoso cluster with 12 nodes, for a total of 3T RAM and 192 cores. This also discusses bulk load, at up to 6M triples/s, and specifics of query optimization in scale-out settings.

  • LOD2 D2.6: Parallel Programming in SQL (2012-08-12)

    This discusses ways of making SQL procedures partitioning-aware, so that one can, map-reduce style, send parallel chunks of computation to each partition of the data.

Publications

2015

  • Pham, M.-D., Passing, L., Erling, O., and Boncz, P.A. "Deriving an Emergent Relational Schema from RDF Data," WWW, 2015.

    This paper shows how RDF is in fact structured and how this structure can be reconstructed. This reconstruction then serves to create a physical schema, reintroducing all the benefits of physical design to the schema-last world. Experiments with Virtuoso show marked gains in query speed and data compactness.

2012

  • Orri Erling: Virtuoso, a Hybrid RDBMS/Graph Column Store. IEEE Data Eng. Bull. (DEBU) 35(1):3-8 (2012)

    This paper introduces the Virtuoso column store architecture and design choices. A single design serves both random updates and lookups and the big scans where column stores traditionally excel. Examples are given from both TPC-H and the schema-less RDF world.

  • Minh-Duc Pham, Peter A. Boncz, Orri Erling: S3G2: A Scalable Structure-Correlated Social Graph Generator. TPCTC 2012:156-172

    This paper presents the basis of the social network benchmarking technology later used in the LDBC benchmarks.

2009

  • Orri Erling, Ivan Mikhailov: Faceted Views over Large-Scale Linked Data. LDOW 2009

    This paper introduces anytime query answering as an enabling technology for open-ended querying of large data on public service end points. While not every query can be run to completion, partial results can most often be returned within a constrained time window.

  • Orri Erling, Ivan Mikhailov: Virtuoso: RDF Support in a Native RDBMS. Semantic Web Information Management 2009:501-519

    This is a general presentation of how a SQL engine needs to be adapted to serve a run-time typed and schema-less workload.

2007

  • Orri Erling, Ivan Mikhailov: RDF Support in the Virtuoso DBMS. CSSW 2007:59-68

    This is an initial discussion of RDF support in Virtuoso. Most specifics are by now different but this can give a historical perspective.

06/03/2015 12:51 GMT-0500 Modified: 06/03/2015 12:53 GMT-0500
SNB Interactive, Part 2: Modeling Choices

SNB Interactive is the wild frontier, with very few rules. This is necessary, among other reasons, because there is no standard property graph data model, and because the contestants support a broad mix of programming models, ranging from in-process APIs to declarative query.

In the case of Virtuoso, we have played with SQL and SPARQL implementations. For a fixed schema and well known workload, SQL will always win. The reason is that SQL allows materialization of multi-part indices and data orderings that make sense for the application. In other words, there is transparency into physical design. An RDF/SPARQL-based application may also have physical design by means of structure-aware storage, but this is more complex and here we are just concerned with speed and having things work precisely as we intend.

Schema Design

SNB has a regular schema described by a UML diagram. This has a number of relationships, of which some have attributes. There are no heterogeneous sets, i.e., no need for run-time typed attributes or graph edges with the same label but heterogeneous end-points. Translation into SQL or SPARQL is straightforward. Edges with attributes (e.g., the foaf:knows relation between people) would end up represented as a subject with the end points and the effective date as properties. The relational implementation has a two-part primary key and the effective date as a dependent column. A native property graph database would use an edge with an extra property for this, as such edges are typically supported.

The only table-level choice has to do with whether posts and comments are kept in the same or different data structures. The Virtuoso schema uses a single table for both, with nullable columns for the properties that occur only in one. This makes the queries more concise. There are cases where only non-reply posts of a given author are accessed. This is supported by having two author foreign key columns each with its own index. There is a single nullable foreign key from the reply to the post/comment being replied to.
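A simplified sketch of that single-table layout follows. The ps_ column names echo those used below, but how exactly the two author columns split between non-reply posts and replies, which columns are nullable, and the datatypes are my reading of the above, not the shipped schema:

    CREATE TABLE post (
      ps_postid        BIGINT PRIMARY KEY,
      ps_creatorid     BIGINT,        -- author of a non-reply post; has its own index
      ps_c_creatorid   BIGINT,        -- author of a reply (comment); has its own index
      ps_reply_of      BIGINT,        -- post/comment being replied to; NULL for top-level posts
      ps_creationdate  BIGINT,        -- milliseconds since epoch (see the datatype note below)
      ps_imagefile     VARCHAR,       -- properties that occur only for non-reply posts
      ps_language      VARCHAR,
      ps_content       LONG VARCHAR   -- body text, shared by posts and replies
    );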

The workload has some frequent access paths that need to be supported by index. Some queries reward placing extra columns in indices. For example, a common pattern is accessing the most recent posts of an author or a group of authors. There, having a composite key of ps_creatorid, ps_creationdate, ps_postid pays off since the top-k on creationdate can be pushed down into the index without needing a reference to the table.
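For example, a covering composite index of that shape, and the kind of top-k query it serves, might look like this (a sketch against the illustrative table above; the index name is made up):

    -- ps_creationdate and ps_postid ride along in the index, so the newest
    -- posts of an author (or small group of authors) come straight out of
    -- index order, with no lookup of the table proper.
    CREATE INDEX post_creator_date ON post (ps_creatorid, ps_creationdate, ps_postid);

    SELECT TOP 20 ps_postid, ps_creationdate
      FROM post
     WHERE ps_creatorid IN (?, ?, ?)   -- an author or a group of authors
     ORDER BY ps_creationdate DESC;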

The implementation is free to choose data types for attributes, particularly datetimes. The Virtuoso implementation adopts the practice of the Sparksee and Neo4j implementations and represents this as a count of milliseconds since the epoch. This is less confusing, faster to compare, and more compact than a native datetime datatype that may or may not have timezones, etc. Using a built-in datetime seems to be nearly always a bad idea. A dimension table or a number for a time dimension avoids the ambiguities of a calendar, or at least makes these explicit.

The benchmark allows procedurally maintained materializations of intermediate results for use by queries as long as these are maintained transaction-by-transaction. For example, each person could have the 20 newest posts by their immediate contacts precomputed. This would reduce Q2 "top of the wall" to a single lookup. This does not however appear to be worthwhile. The Virtuoso implementation does do one such materialization for Q14: A connection weight is calculated for every pair of persons that know each other. This is related to the count of replies by either to content generated by the other. If there does not exist a single reply in either direction, the weight is taken to be 0. This weight is precomputed after bulk load and subsequently maintained each time a reply is added. The table for this is the only row-wise structure in the schema and represents a half-matrix of connected people, i.e., person1, person2 -> weight. Person1 is by convention the one with the smaller p_personid. Note that comparing IDs in this way is useful but not normally supported by SPARQL/RDF systems. SPARQL would end up comparing strings of URIs with disastrous performance implications unless an implementation-specific trick were used.
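A sketch of that half-matrix and its maintenance is below. The names are made up, and the weight is shown as a plain reply counter, which simplifies the formula described above:

    -- Row-wise half-matrix of connection weights; by convention cw_person1
    -- holds the smaller p_personid of the pair.
    CREATE TABLE conn_weight (
      cw_person1  BIGINT,
      cw_person2  BIGINT,
      cw_weight   INT,
      PRIMARY KEY (cw_person1, cw_person2)
    );

    -- Maintained each time a reply is added: bump the weight of the pair
    -- formed by the replying person and the author of the content replied to.
    UPDATE conn_weight
       SET cw_weight = cw_weight + 1
     WHERE cw_person1 = ?    -- the smaller of the two person ids
       AND cw_person2 = ?;   -- the larger

The initial weights are computed in one pass over the data after bulk load, as described above.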

In the next installment, we will analyze an actual run.

SNB Interactive Series

05/14/2015 11:37 GMT-0500 Modified: 06/09/2015 11:36 GMT-0500
SNB Interactive, Part 1: What is SNB Interactive Really About?

This is the first in a series of blog posts analyzing the Interactive workload of the LDBC Social Network Benchmark. This is written from the dual perspective of participating in the benchmark design, and of building the OpenLink Virtuoso implementation of same.

With two implementations of SNB Interactive at four different scales, we can take a first look at what the benchmark is really about. The hallmark of a benchmark implementation is that its performance characteristics are understood; even if these do not represent the maximum of the attainable, there are no glaring mistakes; and the implementation represents a reasonable best effort by those who ought to know such, namely the system vendors.

The essence of a benchmark is a set of trick questions or "choke points," as LDBC calls them. A number of these were planned from the start. It is then the role of experience to tell whether addressing these is really the key to winning the race. Unforeseen ones will also surface.

So far, we see that SNB confronts the implementor with choices in the following areas:

  • Data model — Tabular relational (commonly known as SQL), graph relational (including RDF), property graph, etc.

  • Physical storage model — Row-wise vs. column-wise, for instance.

  • Ordering of materialized data — Sorted projections, composite keys, replicating columns in auxiliary data structures, etc.

  • Persistence of intermediate results —  Materialized views, triggers, precomputed temporary tables, etc.

  • Query optimization — join order/type, interesting physical data orderings, late projection, top k, etc.

  • Parameters vs. literals — Sometimes different parameter values result in different optimal query plans.

  • Predictable, uniform latency — Measurement rules stipulate that the SUT (system under test) must not fall behind the simulated workload.

  • Durability — How to make data durable while maintaining steady throughput, e.g., logging, checkpointing, etc.

In the process of making a benchmark implementation, one naturally encounters questions about the validity, reasonability, and rationale of the benchmark definition itself. Additionally, even though the benchmark might not directly measure certain aspects of a system, making an implementation will take a system past its usual envelope and highlight some operational aspects.

  • Data generation — Generating a mid-size dataset takes time, e.g., 8 hours for 300G. In a cloud situation, keeping the dataset in S3 or similar is necessary; re-generating every time is not an option.

  • Query mix — Are the relative frequencies of the operations reasonable? What bias does this introduce?

  • Uniformity of parameters — Due to non-uniform data distributions in the dataset, there is easily a 100x difference between "fast" and "slow" cases of a single query template. How long does one need to run to balance these fluctuations?

  • Working set — Experience shows that there is a large difference between almost-warm and steady-state of working set. This can be a factor of 1.5 in throughput.

  • Reasonability of latency constraints — In the present case, a qualifying run must have no more than 5% of all query executions starting over 1 second late. Each execution is scheduled beforehand and done at the intended time. If the SUT does not keep up, it will have all available threads busy and must finish some work before accepting new work, so some queries will start late. Is this a good criterion for measuring consistency of response time? There are some obvious possibilities for abuse.

  • Ease of benchmark implementation/execution — Perfection is open-ended and optimization possibilities infinite, albeit with diminishing returns. Still, getting started should not be too hard. Since systems will be highly diverse, testing that these in fact do the same thing is important. The SNB validation suite is good for this and, given publicly available reference implementations, the effort of getting started is not unreasonable.

  • Ease of adjustment — Since a qualifying run must meet latency constraints while going as fast as possible, setting the performance target involves trial and error. Does the tooling make this easy?

  • Reasonability of durability rule — Right now, one is not required to do checkpoints but must report the time to roll forward from the last checkpoint or initial state. Inspiring vendors to build faster recovery is certainly good, but we are not through with all the implications. What about redundant clusters?

The following posts will look at the above in light of actual experience.

SNB Interactive Series

05/14/2015 11:37 GMT-0500 Modified: 06/09/2015 11:36 GMT-0500
DBpedia Usage Report

We've just published the latest DBpedia Usage Report, covering v3.3 (released July, 2009) to v3.9 (released September, 2013); v3.10 (sometimes called "DBpedia 2014"; released September, 2014) will be included in the next report.

We think you'll find some interesting details in the statistics. There are also some important notes about Virtuoso configuration options and other sneaky technical issues that can surprise you (as they did us!) when exposing an ad-hoc query server to the world.

01/07/2015 15:12 GMT-0500 Modified: 01/07/2015 15:13 GMT-0500