I have been spending some time with the latest media tablet, the BlackBerry PlayBook, and in my last write-up I covered the BlackBerry PlayBook Bridge feature. While still in its infancy, the PlayBook Bridge provides some advantages, particularly in enterprises with an investment in BlackBerry Enterprise Server (BES). For consumers, there is value in the shared browser, which essentially enables tethering without an additional fee. What I'd like to look at here are some future bridging capabilities I believe could benefit the consumer even more.
What Is Bridging?
Simply put, bridging is peer-to-peer communication between electronic devices. Devices can connect today, but they require networking protocols and hardware that add complexity and latency. If you haven't read my blog on the PlayBook Bridge, it shows a bridging implementation that exists today between the PlayBook tablet and a BlackBerry smartphone. In the future, data sets will be larger, but so will the wireless pipes that carry them. New generations of peer-to-peer or PAN technologies such as WiFi, WiGig, and Bluetooth will make wireless data transfer even faster, with less latency than a LAN or WAN.
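To make the idea concrete, here is a minimal sketch of what "bridging" means at the software level: one device listens, the other connects and streams bytes straight across, with no router, file server, or cloud service in the middle. The TCP-over-loopback transport, port choice, and payload are all illustrative stand-ins; a real PAN link (WiFi Direct, WiGig, or Bluetooth) would replace the socket layer used here.

```python
import socket
import threading

def bridge_listen(srv, received):
    """The 'host' side of the bridge: accept one peer and read its stream."""
    conn, _ = srv.accept()
    with conn:
        while chunk := conn.recv(4096):
            received.extend(chunk)

# Set up the listening end before the peer connects (no race).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

received = bytearray()
listener = threading.Thread(target=bridge_listen, args=(srv, received))
listener.start()

# The peer side: connect directly and push the payload across the bridge.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as peer:
    peer.connect(("127.0.0.1", port))
    peer.sendall(b"vacation-photo.jpg bytes")

listener.join()
srv.close()
```

The point of the sketch is what is absent: no DHCP, no router configuration, no service discovery through a LAN. The two endpoints talk directly, which is exactly the simplification bridging promises.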
Below are a few useful “bridging” functions that I can see adding benefit for the consumer:
If I am bridged to another device, I would like to share the peripherals it is connected to, and vice versa. Specifically, I would like to more easily connect to the other device's display, storage, network, and printers. The benefit is ease of use: I only need to connect to the host device rather than duplicate or triplicate a complex setup.
When I walk into my home office in the future, I'd like to press one button on my smartphone and display its multiple screens on my multiple large monitors. Imagine five Honeycomb screens lighting up my five 17″ monitors. I'd also like to bridge to my large workstation to leverage its terabytes of storage and its printers without having to go through the router or be mired in network, WAN, DLNA or PnP hell. I understand that it is possible to connect many of these peripherals today, but setup and reliability are lacking.
With operating systems and application environments splintering at a fast rate, the odds are high that a consumer could have four devices that don't run any of the same code. That person could have an Android smartphone, an iPad, a set-top box and a Windows PC. Sure, many ISVs recode for different OSes and application environments, but can they afford to do this in the future, and what about the Tier 2 applications or even the long tail?
So if I have multiple CPUs and GPUs on each device with multiple operating systems, why can't I dedicate a few of those cores and SIMDs to emulate or virtualize different operating systems and application environments? This way, if I have a smartphone, media tablet, convertible tablet, set-top box, notebook and desktop with different operating systems, they could run each other's applications. For example, if my son were playing Angry Birds on my iOS-based tablet but I wanted to access iMovie, I'd like to use my six-core, AMD Phenom™ II X6-based desktop and emulate iOS in Windows.
Sound far-fetched? Well, I see some emerging implementations of it today. Have you seen how the Atrix Lapdock displays an Android phone window in the Motorola WebTop environment? That's a decent representation of what could be architected. Also, have you ever heard of a company named BlueStacks? It is reported that it will allow Android apps to run on Windows. It is also being reported that HP will preinstall webOS on all its PCs.
There is a very understandable counter to this proposal: apps will continue to be optimized for a specific form factor, and timesharing those devices while still living within their power budgets will be impossible. Time will tell, and it comes down to a tradeoff between two different types of complexity and investment.
I like to describe this as "RDP + Microsoft RemoteFX on steroids," where it appears that both bridged devices can "run" each application, but in fact it is a highly compressed, reformatted presentation optimized for the receiving device.
For instance, let's say I'd like to "run" Office Mail from my iPad when I am at work. I already have Office on my desktop at work, so I'd like to remote desktop to my system, have it super-compress and display only the specific visual parts of my inbox appropriate for my iPad. Most of the processing actually happens on my powerful desktop, but it appears to happen on my iPad. For that matter, my "desktop" could be hosted at a data center in a different state.
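The core trick behind this kind of remote presentation can be sketched in a few lines: rather than shipping the full screen every frame, the host sends only the tiles that changed, compressed, and the receiving device decompresses and pastes them into its local copy. The tile size and flat one-byte-per-pixel frame representation here are invented purely for illustration; RDP and RemoteFX use far more sophisticated encodings.

```python
import zlib

TILE = 4  # tile length in "pixels"; one byte per pixel for simplicity

def changed_tiles(prev, cur):
    """Host side: return (offset, compressed bytes) for each tile that differs."""
    updates = []
    for i in range(0, len(cur), TILE):
        if prev[i:i + TILE] != cur[i:i + TILE]:
            updates.append((i, zlib.compress(cur[i:i + TILE])))
    return updates

def apply_tiles(frame, updates):
    """Receiver side: decompress each tile and paste it into the local frame."""
    buf = bytearray(frame)
    for i, blob in updates:
        tile = zlib.decompress(blob)
        buf[i:i + len(tile)] = tile
    return bytes(buf)

# A 16-"pixel" frame where only the last tile changed between frames.
prev = b"A" * 16
cur = b"A" * 12 + b"B" * 4
updates = changed_tiles(prev, cur)       # one dirty tile, not a full frame
reconstructed = apply_tiles(prev, updates)
```

The receiving device ends up with a pixel-perfect copy while only a fraction of the screen ever crossed the wire, which is why the application merely *appears* to run locally.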
I would describe this bridging as DLNA that works reliably and without the network middleman. The concept is simple: I directly bridge to a device and get access to all its music, movies, videos, documents and games, with the appropriate DRM handshaking for paid content. Content is transcoded back and forth in the appropriate bit rate and format. There is some debate about whether a DRM handshake would even be needed, as long as the device actually playing the content is authorized and the other device is merely mirroring it.
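The negotiation a direct media bridge would need can be sketched simply: the playing device advertises what it can decode, the source picks the best common format, and transcoding happens only when nothing matches natively. The device names, format strings, and preference-order convention below are all invented for illustration; real systems like DLNA negotiate with far richer capability descriptors.

```python
def negotiate(source_formats, sink_capabilities):
    """Pick a streaming format for source -> sink.

    Both lists are in preference order. Returns (format, needs_transcode):
    a direct passthrough if the devices share a format, otherwise a
    transcode into the sink's preferred codec.
    """
    for fmt in source_formats:          # try source's preference order first
        if fmt in sink_capabilities:
            return fmt, False           # common format: stream it as-is
    # No overlap: transcode to whatever the sink likes best.
    return sink_capabilities[0], True

# Hypothetical devices: a tablet streaming to a TV and to an older phone.
tablet = ["h264_1080p", "vc1"]
tv = ["h264_1080p", "mpeg2"]
old_phone = ["mpeg4_sp"]

tv_choice = negotiate(tablet, tv)            # ("h264_1080p", False)
phone_choice = negotiate(tablet, old_phone)  # ("mpeg4_sp", True)
```

Transcoding only on a cache miss like this is what keeps the bridge cheap in the common case; the spare cores and SIMDs mentioned below would absorb the expensive case.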
Yes, there are aspects of content sharing that can be done today, but they require special client software, DLNA, LAN, WAN, and service providers, which combine into complexity soup.
As in the application sharing scenario, there are multiple compute cores and SIMDs available to accomplish this feat, along with, of course, some amazing software development.
In the future, bridging could bring an immense benefit to the consumer. The vision is a consumer interacting more seamlessly with devices and content they already own, without the overhead and complexity of networks. Software advances are already underway, and there will certainly be enough compute power to enable this.