I've been fortunate to work in -- well, at least dabble in -- many areas of Computer Science over the past 30-odd years. And, yes, a good many of them have been fairly odd... I've worked in CAD for VLSI (basically, logic synthesis and verification), computer algebra, natural-language processing, distributed systems, and networking. (Oddly, through a chain of acquisitions, SAP wound up owning my NLP patents; those and a quarter will get you a newspaper. But it's weird to meet those things again after all these years...) These days I mostly fool around in web front ends and distributed infrastructures. I work with the Lively Web guys a lot, and I spend a fair amount of my time writing applications with Lively components and, in particular, thinking about applications that live in the Cloud but really want to be near the end user, and so require a distributed cloud infrastructure. In addition to working at SAP, I have a couple of synergistic jobs. I'm an Adjunct Professor of Computer Science at the University of Victoria, where I almost always have a couple of Master's students on the go and two or three more in Directed Studies courses. I'm also the Chief Scientist of US Ignite, and in that capacity I advance Ignite's goal of seeding the next generation of network applications in the public interest. I'll talk about that below.
Rick McGeer rick@cdglabs.org
The GENI Experiment Engine

The Global Environment for Network Innovations (GENI) is a distributed cloud and network testbed, with over 5000 cores distributed over 50 sites across the United States. GENI features deep programmability at all levels of the networking and computing stack, in which private virtual networks are created across the infrastructure. For the past couple of years, working with Northwestern University and Princeton University, I've been building a container-as-a-service platform across GENI. The GENI Experiment Engine (the GEE) offers one-click allocation of slices to GENI users (essentially, anyone with an account at any institution of higher education in the US), giving people Docker containers running a bare Ubuntu 14.04 LTS image at 20 sites across the United States (a sketch of what that per-site allocation amounts to appears after the list below). Over the next year, we plan to extend this capability in a number of ways:
● Offer multiple images and containers at each site, permitting people both to create their own images and to deploy applications that require more than one container per site.
● Integrate the GEE with a wide-area storage infrastructure such as the Syndicate file system or the Intelligent Data Movement System.
● Deploy a multi-tenant web services platform, perhaps based on Lively, which permits experimenters to share perennially scarce public-facing IP addresses (insert gnashing of teeth here; this is an entirely artificial shortage). This is a requirement for a lot of the applications I want to build personally -- people need to be able to access distributed services.
● Make the platform straddle multiple infrastructures. Early targets include Canada's GENI-like Smart Applications on Virtual Infrastructure (SAVI) platform and the National Science Foundation's Chameleon cloud infrastructure.
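To make "one-click allocation" concrete, here is a minimal sketch of the per-site unit of work, written in TypeScript against the dockerode library. The image tag, the container-naming scheme, and the allocateSliver helper are my own illustrative assumptions, not the GEE's actual code:

```typescript
// Minimal sketch: allocate one container ("sliver") for a slice at one site.
// Assumes the dockerode npm package; names and options are illustrative.
import Docker from "dockerode";

const docker = new Docker(); // talks to the local Docker daemon

// Hypothetical helper: one sliver = one bare Ubuntu 14.04 container.
async function allocateSliver(sliceName: string): Promise<string> {
  const container = await docker.createContainer({
    Image: "ubuntu:14.04",        // the bare LTS image the GEE hands out
    name: `gee-${sliceName}`,     // one container per slice at this site
    Tty: true,                    // keep a shell alive for the experimenter
    Cmd: ["/bin/bash"],
  });
  await container.start();
  return container.id;
}

// Usage: run once per site to give a slice a container everywhere.
allocateSliver("my-experiment").then((id) => console.log("sliver", id));
```

The GEE's real job is orchestrating this across 20 sites at once, plus keys and networking; but the per-site unit of work it automates is essentially this.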
Chief Scientist, US Ignite

US Ignite is a non-profit 501(c)(3) foundation designed to seed the next generation of network applications in the public interest. There are six primary areas of interest for Ignite: education, health care, public safety, transportation, energy, and advanced manufacturing. As Chief Scientist of Ignite, I pull together teams to deploy advanced applications across advanced networking and computing infrastructures, notably the GENI infrastructure above. Recent activities have included deploying Lockheed Martin/DoD ADL's Mars Game and the Virtual Worlds Framework on GENI, and developing and deploying the Lively-based Ignite Distributed Collaborative Scientific Visualization System (see below). With the GENI Project Office, the University of Toronto, and the University of Victoria, I'm working to federate the GENI and SAVI infrastructures. When this project is completed (we should have a first version up and running in June), people will be able to create VMs on virtual networks spanning North America.
The Ignite Distributed Collaborative Scientific Visualization System

Many scientific projects involve visualizations of very large data sets. For this reason, the NSF and other agencies have developed a number of high-end visualization systems, notably the CAVE and the OptIPuter. Since experts are distributed and want to interact with the visualization and with each other through the data, a number of these systems have been designed for collaborative interaction, featuring very high-bandwidth specialized networks. These systems and the networks that support them are large and expensive; they don't come to your desktop, much less to your hand.

The Ignite Distributed Collaborative Scientific Visualization System is a web-based interactive collaborative visualization system designed to be deployed on any device, and to interact with the user seamlessly, with data sets of any size. The combination of small devices with big data implies the need for a server infrastructure to render the data for the user; the requirement for seamless interaction forces a server to be near every user. And in a collaborative system, the users can be anywhere; hence the servers must be everywhere. The system features a network of data servers with replicated data and computation at a number of sites throughout the United States, Western Europe, and Japan, with visualizations served on identical Lively pages. We use Lively-2-Lively as an inter-page messaging system, which permits viewers of each page to see the actions of other users: a user pans across a map in Tokyo, and the map in Washington moves. Only view-change events are sent across the wire, with a specification of the view to be displayed in the remote location.

The Ignite Visualization System provides seamless interaction and immediate updates even under heavy load and when users are widely separated: the design goal was to fetch a data set consisting of 30,000 points from a server and render it within 150 milliseconds, for a user anywhere in the world, and to reflect changes made by a user in one location to all other users within a bound set by network latency. The system was demonstrated successfully on a significant worldwide air pollution data set, with pollution values on a 10 km, 25 km, 50 km, and 100 km worldwide grid, with monthly values over an 18-year period. It was demonstrated on a wide variety of clients, including laptops, tablets, and smartphones, with eight users simultaneously connected and manipulating the map from sites in Tokyo, Victoria, San Francisco, Washington, Ghent, and Potsdam. This is a joint project of UVic, HPI, UT-Dallas, and CDG. Our plans are to extend this to other types of scientific data sets and visualizations.
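To make "only view-change events are sent across the wire" concrete, here is a minimal sketch of that relay pattern, assuming a plain WebSocket server built with the ws npm package. The real system uses Lively-2-Lively, whose API this does not reproduce; the ViewChange shape and the port are my own assumptions:

```typescript
// Minimal sketch of a view-change relay, assuming the "ws" npm package.
// The real system uses Lively-2-Lively; this only illustrates the pattern:
// each client sends a small view descriptor, the server fans it out, and
// every other page re-renders that view from its nearest data server.
import { WebSocketServer, WebSocket } from "ws";

// Hypothetical view descriptor: just enough to reproduce the view remotely.
interface ViewChange {
  center: [number, number]; // lat/lon of the map center
  zoom: number;             // zoom level
  month: number;            // which monthly pollution slice to show
}

const wss = new WebSocketServer({ port: 8080 }); // port is an assumption

wss.on("connection", (ws: WebSocket) => {
  ws.on("message", (data) => {
    const view: ViewChange = JSON.parse(data.toString());
    // Fan the (tiny) view descriptor out to every *other* client; the
    // heavy data fetch and render happen locally at each site.
    for (const client of wss.clients) {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify(view));
      }
    }
  });
});
```

The point of the design is that a pan in Tokyo costs each viewer one small JSON message, not a re-shipment of the 30,000-point data set: every page fetches and renders from its nearest replica, which is what makes the 150-millisecond budget plausible for users anywhere in the world.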
YALPS: Yet Another Lively Presentation System

Presentation systems are the "Hello, World" of Lively. So of course I had to do one. Look, this is the lowest-hanging fruit for a killer app for Lively. PowerPoint and Keynote suck; Google Slides is OK, kinda, sorta. Prezi is acceptable. Lively ought to be great: everything is inherently on the web, we can pull in any content we can access, every object is live... and Morphic removes a lot of the clutter that a global UI for every object brings. I really didn't get how big a thing Morphic was until I looked at Impress after having used Lively -- what do all those little icons do? YALPS is a descendant of El Profesore: we did a re-implementation, but that's where we took our initial inspiration. A few things we added:
● The Builder cuts out a lot of the noise from heavily-animated slides.
● We added an external Slide Sorter.
● Individual slides can be published to a special PartsBin.
● We adapted the remote-control work from the visualizer (see above) to permit remote control of presentations over the web (a sketch of this follows the list).
This is a joint project of UVic and CDG. Critiques (we've had plenty already) are always welcome.
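Carrying over the relay sketch from the visualizer section, remote control of a presentation reduces to the same pattern with an even smaller message. The message shape, URL, and showSlide hook here are illustrative assumptions, not YALPS's actual protocol:

```typescript
// Minimal sketch of YALPS-style remote control, using the browser
// WebSocket API. The message shape and URL are my own assumptions.
interface SlideCommand {
  slide: number; // index of the slide every follower should show
}

// Hypothetical hook into the presentation; in Lively this would set the
// currently visible slide morph rather than call a plain function.
declare function showSlide(index: number): void;

const socket = new WebSocket("wss://example.org/yalps-remote"); // illustrative

// Presenter side: advancing the deck just broadcasts a slide index.
function goTo(index: number): void {
  socket.send(JSON.stringify({ slide: index }));
}

// Follower side: render whatever slide the presenter last announced.
socket.onmessage = (event: MessageEvent) => {
  const cmd: SlideCommand = JSON.parse(event.data);
  showSlide(cmd.slide);
};
```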
Other Stuff

● I'll be working with Ken, Marko, and Dan on the CDM project; a couple of my UVic students will be providing onsite support.
● I keep building dashboards and monitoring infrastructures for systems in Lively.
● I'm working with Martin Swany of IU to add a Lively dashboard to perfSONAR, a monitoring infrastructure for the high-performance systems community.
● We taught Lively in a WebDev class at UVic last fall, and may do it again this fall.
● I'm working with Joe Mambretti of Northwestern to build a 10 Gb/s nationwide network for us, the Advanced Media Collaboration Network. If SAP IT can get us a tail circuit to 200 Paul in SF and to the UCLA campus, we'll have 10 Gb/s nationwide...