Buyers and practitioners in the data market have faced a gut-wrenching choice: work with Big Data and tolerate the latency of batch-mode processing, or work interactively with relatively small data sets. To be sure, each of these two use cases has been a sweet spot for the available technologies: Hadoop has excelled at storing huge volumes of data cheaply and processing it with the batch-oriented MapReduce framework; relational databases work interactively, but on smaller data volumes and with more expensive storage economics.
Are organizations locked into this dichotomy? Can we have all our data and process it interactively too? Might we make Hadoop not just a cheaper data warehouse, but a radically improved operational database? And if so, what technologies are needed to make that happen?