In my last post, I outlined a few benefits that developers could gain from using KRL. Well and good, but so what? Are they benefits that matter? How do they relate to the other reasons programmers learn new languages, like money and opportunity? In this post, I will talk about some of the big trends driving my thinking and informing the decisions about what KRL is and why it enables the abstractions that it does.
There are three trends I'd like to examine:
- The rise of real-time data and the evented Web
- The rise of personal data and user-centricity
- The rise of context as an organizing and extending framework for content
We'll examine these in turn.
Real-time Data and Events
In their book The Power of Pull, John Hagel, John Seely Brown, and Lang Davison describe some of the incredible changes impacting businesses. In one of these, they describe how the world is moving from valuing "stocks" of knowledge to "flows" of knowledge. Anyone spending more than a few minutes on the Web can see how this is true. We don't amass information and hoard it; rather, we search for it when we need it.
More and more companies are putting the data and services that drive their Web sites online using an API. Euphemistically, the result of this move has been called "the cloud." There are good reasons why cloud-based data and services are gaining traction: Cloud-based services are more accessible, more convenient, and cheaper than equivalent services delivered using more traditional means.
As I write this in April 2011, Programmableweb.com, a directory of online APIs, lists over 2300 APIs in its index. This number is certain to grow. The list includes APIs for searching, financial services, blogging, ad networks, dating, email, government, security, shopping, and so on. Some are free and others charge money. Some are personalized (like my Twitter friend feed) and others are general information (like the Google news feed). APIs are the unit of programming on the Web---similar to libraries in traditional applications.
But mere APIs aren't enough. To see why, imagine two scenarios. In the first, your teenage daughter is out with friends. Her curfew is midnight but it's 12:20am and you haven't heard a word. You're worried, imagining an accident on the freeway, or worse. You're calling her cell, her friends, and trying to keep calm.
In the second scenario, your daughter is again out with friends, but this time, a few minutes before twelve, you get a call that goes something like "Hi Mom. I'm going to be 15 minutes late...the movie ran long."
One scenario is filled with hassle and anxiety---the other with convenience and tranquility. There's no doubt which scenario we'd rather be in. And yet, online, we're rarely in a situation where a service anticipates our needs and meets them without prodding on our part. The location metaphor of the static Web puts us in the mode of "seek and find" or "go and get."
When an API merely responds to requests, it's like a program that only accepts input, but can never send its output to another application unless it's asked first. We call such APIs half-duplex APIs and readers familiar with the Web and the underlying client-server model will recognize the roots of half-duplex APIs in the foundational technologies of the Web.
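The contrast between half-duplex and evented interaction can be sketched in a few lines. The following Python is purely illustrative (the class and event names are mine, not part of KRL or any real API): the first service only answers when polled, while the second pushes events to subscribers as they happen.

```python
class HalfDuplexService:
    """A request-response (half-duplex) service: it only speaks when spoken to."""
    def __init__(self):
        self.status = "unknown"

    def query(self):
        # The caller must poll; the service never initiates contact.
        return self.status


class EventedService:
    """An evented service: interested parties subscribe and get notified."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def raise_event(self, event):
        # The service pushes the event to every subscriber as it happens.
        for notify in self.subscribers:
            notify(event)


evented = EventedService()
received = []
evented.subscribe(received.append)
evented.raise_event({"domain": "family", "type": "running_late", "minutes": 15})
```

In the curfew scenario, the worried parent is stuck polling a half-duplex service; the relieved parent is a subscriber to the `running_late` event.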
In the early days of the Web, all that mattered were domain names and Web pages---brochure-ware, I called it. Later the name Web 2.0 embodied the idea of interactive Web sites where users could actually do something beyond filling out simple forms. The earliest interactive Web services were ecommerce tools in the late 1990s. Later the idea of interactive Web sites extended to all kinds of services from finance to document editing.
Interactive Web services have several problems. First, they tend to be silos that interact with other sites and services in only the most rudimentary ways. If the service doesn't do everything that the user wants, there's almost never a way to combine two services together to solve a unique problem. Second, and more problematically, as we've seen, they are built on a request-response model that requires the user to initiate each interaction.
In contrast, some web applications are beginning to push information to users. This is not only more convenient for users, but creates data streams that other tools can combine to solve problems that require mashing up information from multiple sources. We call the set of technologies and practices that enable users to receive information as soon as it is published by its authors the real-time Web.
You don't have to go any further than Facebook, Twitter, or Foursquare to see the real-time Web in action. These services aren't just interactive Web sites, but are creating streams of data about the people you follow. The stream of tweets from my friends is available to me in a variety of places without me needing to visit any particular Web site. RSS, SMS, and even plain old email are all ways of notifying you that information is available now.
The real-time Web won't replace the interactive Web---we'll always need Web sites---but the real-time Web will augment it in important ways. Real-time is a trend that is so natural when we think of what people want and what they pay for that it's hard to imagine the world any other way once you experience it. People want to be connected all the time. But more than that, they want information to come to them when it's useful or interesting.
Personal Data and User Centricity
One of the hottest trends of the last several years is user-controlled identity. Just as you'd expect, user-controlled identity places the user structurally inside decisions regarding the user's identity information. This is a sharp contrast to how enterprise systems usually talk about the user behind her back.
User-controlled identity systems are coming along just in time. Organizations are increasingly finding that storing and managing personal data is expensive, prone to error and inaccuracy, and undermines, rather than strengthens, the relationship of trust they want with their customers. People resent having data about them used, without their permission, by companies for profit.
There have been some significant advances in the technology of identity and personal data as well. OpenID, OAuth, Webfinger and the like create opportunities for applications to use personal data without having to store and be responsible for it.
With the advent of user-controlled identity services, we can envision a future where users more freely link together their personal data, stored in online services. Personal data has the power to make our online interactions more meaningful (as a simple example, imagine a world where online ads are random vs. a world where they are based on things that interest you). Laying aside privacy concerns, most people would rather see relevant ads than random ones. What's more, user-controlled identity will make such sharing more private and secure.
People are already becoming socialized to the idea of keeping data about themselves online through services like Facebook and LinkedIn. If you want someone's current email or phone, chances are better that they've kept their Facebook page up to date than that they've contacted you so you can update your personal list. The idea that we'll have one place for canonical versions of personal data is gaining momentum.
OAuth and more sophisticated protocols like UMA allow people to assemble virtual stores of personal data from the various services they use around the Web. These virtual stores will also allow you to augment the attributes that various services keep about you to assemble as complete a picture as you'd like, because it's all under your control.
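Conceptually, such a virtual store is an overlay: attributes pulled (with the user's authorization) from several services, with the user's own additions layered on top. Here is a toy Python sketch of the merge; the service names and attributes are invented for illustration and are not part of OAuth or UMA themselves.

```python
def assemble_profile(*sources):
    """Merge attribute sets from several services into one profile.
    Later sources override earlier ones, so the user's own
    corrections take precedence over what services report."""
    profile = {}
    for source in sources:
        profile.update(source)
    return profile


bank = {"name": "Alice", "postal_code": "84042"}
social = {"name": "Alice W.", "email": "alice@example.com"}
user_overrides = {"postal_code": "84604"}

# The user's override of postal_code wins over the bank's stale value.
assemble_profile(bank, social, user_overrides)
```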
We usually speak of identity in the singular, but in fact, we all have multiple identities. I view all of my accounts as linked together and they seem like different facets of my overall identity. The problem is that the only thing linking them together is me, the person. From the outside looking in, they appear disjointed and incomplete. My bank sees one set of attributes and my employer sees another. Occasionally they link up, such as when I give my employer my checking account number for direct deposit, but they mostly exist in utter, frustrating isolation.
This may seem like a good thing from a privacy standpoint, but we've all suffered from the hassle of having data in one place that we can't use somewhere else. The message of user-controlled identity is we don't have to sacrifice privacy for convenience. We can link all of this data through an online agent under our control.
We tend to think about the potential of better access to personal data from the perspective of what's happening now, but it's also instructive to examine what could be. For example, in his book, Pull, David Siegel describes a golfing scenario where your clubs, bags and even the balls and cups communicate with each other and transmit information about your game to a personal data store in the cloud.
In this scenario, as you play golf, the score is calculated as data comes in. You see your score on your phone. You don't calculate the strokes; rather your strokes and even how far your ball is from the hole are calculated for you. Your phone can even help you find your ball in the tall grass. After the game, you can replay it on a map of the course and get analysis about how you could have played better. While I can't guarantee that your game will improve, this system will keep your fellow golfers honest about their game.
As this example illustrates, even the most mundane data about us is likely to find its way online. Endomondo records information about my bike rides and puts it online. The Withings Bodyscale has WiFi built in and pushes your weight to the Web every time you weigh yourself. A FitBit will record your daily activity and push it to an online account. As the world becomes increasingly instrumented more and more of this kind of data will find its way into the cloud.
Even greater instrumentation is only as far away as your phone. Most of today's smart phones are really sophisticated mobile sensing platforms that also happen to make calls. App stores are full of software that uses those sensors to collect data on our behalf and push it into the cloud. This "exhaust data," as it's called, will create new opportunities for people and the companies that serve them.
Context
In Storytime on the Interwebs, Venkatesh Rao makes an important connection between "plural, interconnected and dynamic 'mesh' experiences," what I'd call Live Web applications, and stories. The relationship is more than an analogy. Every good Web experience is a story and, knowing that, we can use the techniques of storytelling to help us better understand them.
Every story has certain components: Stories have a purpose or intent---the thing that defines them. Stories have a context that creates a model of what information is relevant. They need a tempo to move them along and define the timeframe. Stories have a narrative structure that connects things together through the beginning, middle and end as well as connecting the past, present and future. And stories have characters---antagonists, protagonists, and supporting cast.
Building applications is, at its best, story telling. The best application developers are storytellers. That's why techniques like storyboarding work as well in designing software as they do in creating books or movies.
Understanding what information is relevant to a particular story is critical to creating a story that is compelling and interesting. Context isn't just about what you include, it's also about what you leave out. In Rao's article, he says:
"When my sister and I were kids, she used to ask apparently odd questions about scenes in movies. Say two people are chatting over tea on the screen. Cut to a scene where one of them is no longer holding a tea-cup. My sister would ask, 'What happened to the teacup?'"
"The answer is, 'Who cares?'"
"Presumably the character put it down, and somebody later washed it. It doesn't matter. If the teacup were significant, the storyteller would usually foreshadow events to come by noting it without explanation (such as a close-up shot of the cup after the people have left the room; perhaps signaling a potential poisoning)."
"So context is not merely about retaining developing momentum, it is about doing so selectively. You have to decide what is significant in the story."
Context is the model we use to make sense of the overabundance of information that any system faces. In the world of the real-time Web, no one wants to be constantly interrupted by events and notifications. Filters are the common answer, but filters need to be more than mere sieves. They need to be smart. They need context.
Context is related to intent. For example, if I show you a handful of URLs, you will interpret them differently if I tell you my intent is to plan a trip or if I tell you my intent is to write a research report for my international relations class. The context changes and with it, the relevance of other information we might run across.
When we tell stories on the Web, we use context to relate events to each other in meaningful ways. One of the real powers of event-based programming is the ability to contextually correlate events. Correlation goes beyond filtering, because it allows us to look for patterns in the event stream that are meaningful for a particular purpose. Contextual correlation can use patterns to explicitly link events across time and event domains.
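KRL expresses these patterns declaratively in its event expressions. As a rough illustration of what explicit correlation means, here is a small sketch in Python rather than KRL, with invented event names: a correlator that watches a time-ordered stream for an event of one type followed by an event of another type within a time window.

```python
def correlate(stream, first_type, second_type, window):
    """Scan a time-ordered stream of (timestamp, event_type) pairs.
    Return True if an event of first_type is followed by an event
    of second_type within `window` seconds."""
    pending = None  # timestamp of the most recent first_type event
    for ts, etype in stream:
        if etype == first_type:
            pending = ts
        elif etype == second_type and pending is not None:
            if ts - pending <= window:
                return True
            pending = None  # too late; wait for the pattern to restart
    return False


stream = [(0, "phone:inbound_call"), (90, "web:pageview"), (400, "web:pageview")]
correlate(stream, "phone:inbound_call", "web:pageview", window=120)  # True
```

The point of an event expression language is that patterns like "A then B within two minutes" are stated directly rather than hand-coded this way in every application.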
Context can also be used to correlate events implicitly. In many ways, implicit event correlation is the most powerful because it has the power to allow serendipitous interactions. Serendipitous interactions provide for uses that designers didn't anticipate---perhaps couldn't anticipate. The inherent loose coupling of event-based systems and their resilience in the face of error enable serendipitous interactions and are one of the primary ways to create Web applications that respond dynamically and flexibly to user intent.
Context is not explicitly bound to a specific process or event stream but is relevant to multiple processes through implicit relationships. Knowing, for example, the time of day, the weather, the prime rate, commodity prices, or that it's Super Bowl Sunday provides information that can be used to correlate events. These correlations are made in the logic of the Live Web applications that respond to the events.
Content is more valuable when it's in context. And every user has their own context. Only a system designed to respond to events important to the user on a personal level can deliver the value associated with content in context. That's what KRL is designed to enable programmers to do.
These trends are significant and real. I believe they offer important and valuable benefits to people and will result in a Web we would hardly recognize today.
In my last post, I offered up three key benefits for developers using KRL. The trends I've described above are related to each of those benefits and give clear reasons why I believe developers will care about these benefits in building future Web applications.
KRL's event expression language provides an abstraction that is as important and powerful for handling dynamic data streams as SQL was in dealing with databases. Event expressions (or eventexs) provide a common language for reacting to and talking about events that people care about. I cannot overstate the importance of eventexs to what KRL is and why it's different from every other programming language you know.
KRL's built-in support for personalization goes well beyond merely saving developers from having to build a few login screens and the associated logic. KRL recognizes at its foundation that personal data is important and makes it easy to keep track of the separate entities using an application. More importantly, KRL also makes it simple for developers to link in other data from other cloud-based services. The system underlying KRL, the Kinetic Rule Engine or KRE, is built from the ground up to support user-centricity and the interactions that it requires.
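The idea of keeping track of separate entities can be pictured as state that is scoped per user rather than per application. The Python below is a rough analogue of that idea, not KRL's actual API; the class and method names are mine.

```python
class EntityStore:
    """Per-entity state: every user sees their own copy of each variable,
    so one application's data stays partitioned by the entity using it."""
    def __init__(self):
        self._data = {}

    def get(self, entity_id, name, default=None):
        return self._data.get(entity_id, {}).get(name, default)

    def set(self, entity_id, name, value):
        self._data.setdefault(entity_id, {})[name] = value


store = EntityStore()
store.set("alice", "visits", 3)
store.set("bob", "visits", 1)
store.get("alice", "visits")  # 3 -- each entity's state is isolated
```

The developer writes one ruleset; the system supplies a distinct instance of its state for every entity, which is what makes personalization a property of the platform rather than extra code.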
Having a lot of personal data hanging around provides little value in and of itself. Facebook isn't valuable because it has lots of data about you, but because of what it allows you to do with that data. Similarly, the personal data trend demands applications that use the data to help people accomplish valuable tasks. People value interactions--the data in motion--not the mere stockpiling of it. KRL is positioned for building applications that put personal data to use.
On the Web, creating contextual applications often means breaking down the silos of individual Web applications. People don't go online to visit Web sites. They go online to get something done--achieve some purpose. The browser is the perfect platform for deploying applications that cross organizational boundaries and the artificial barriers they impose.
Everyone I know can relate to the idea that they often have to cut and paste data out of one Web site and into another. They keep the context of their interactions in their head and flip between browser windows or tabs to create an experience that gets the job done. KRL's ability to link data from multiple Web applications and use it in context is a core value that the language and its underlying system provide. Using KRL, building cross-domain Web applications is simple and quick.
You might think it silly to design a language around trends, but frankly that's where many new languages come from. PHP became popular because it supported the need for data-driven Web sites. Rails (a domain specific language in Ruby) became popular because it made creating interactive, Web 2.0-style applications easier. If I'm right and real-time, event-driven, personalized, contextual Web applications are going to rule the future, then a language that supports building them isn't a bad idea--it's essential. Without the abstractions that a language provides, the job is just too complicated.
The ideas within KRL are important and the abstractions that it supports are timely. Install a few KRL applications to try them out yourself. Write a KRL program if you're a developer. We're anxious to get your feedback.