Friday, May 16, 2014

Functional Federated Modeling – This is What the Internet Could Be!

Leading Note: There are dozens of amazing ways to scale massively multi-user simulations using clever network technologies. We could create virtual worlds and distributed data hives beyond imagining. However, all of this is threatened by the FCC’s decision to allow internet fast lanes. Progress will be set back decades or shipped overseas; killing net neutrality would impair American economic growth, national security, and national science. We are on the precipice of transforming the internet into its full potential, and ISPs would prevent this to protect broadcast media, sinking the American future with it.

Building Bigger Virtual Worlds

In order to serialize and efficiently model complex simulations, computation needs to be broken down into several parts. Parallelization is nothing new, and neither is the concept of assigning functional components of a simulation to different machines on a network. What varies from project to project is how this division takes place and what defines the purpose of the sub-components. First, I’m going to summarize, to the best of my ability, how distributed simulation architecture currently works. I will then speculate on how these somewhat separate concepts could be combined to vastly expand the capabilities of this type of simulation network.

This overview stems from the depth of study I have undertaken independently since the winter of 2012. I started the /r/Simulate community, which pooled resources and began a very compelling coding initiative. The work done by my cohort cannot be overstated; I am blown away by the level of thought that Aaron Santos put into the MetaSim API. While I have never been anywhere near as good a developer as Aaron and team, my investigation has instead been focused on becoming a subject matter expert for all technologies and strategies revolving around distributed simulation.

This led to a contract role with Magnetar Games, where I learned a great many new concepts from its CEO, Duncan Suttles. His expertise in simulation architecture spans decades, ranging from the inner workings of a run-time rendering client to the layers of protocol required to exchange resources between parts. From the beginning, his group has been working with Department of Defense standards such as SEDRIS, pioneering its XML implementation alongside the additional requirements of High Level Architecture (HLA). The Magnetar team’s plan is to integrate these simulation schema standards into the fabric of a distributed run-time infrastructure, which would eventually be used to describe scenes in popular virtual world server systems.

So much intellectual labor has gone into designing simulation mechanisms that make maximum use of systems in the most efficient ways possible. The exciting new element of our current paradigm, however, is the rising popularity of in-browser graphics. While a browser with WebGL is not at all required to construct a distributed simulation on the web, it adds a massive gain in the ability to reuse existing web technologies such as CSS to quickly create interactive components, and it makes integration with other web services far easier. There is some debate as to whether the king of WebGL will be native JS or LLVM-compiled code; I believe both offer solutions to different problems.

Ultimately, though, WebGL is simply the icing on the cake; the biggest leap will come from our ability to generate procedural content across massively parallel systems in real time. By segmenting aspects of simulation, utilizing grids, and defining flexible, evolving standards, the world will be introduced to simulations of ever greater complexity. That complexity boost won’t exist only in graphics or immersive peripherals, but also in the resources available for neural networks, vast agent systems, and more believable procedural narratives. The holy grail of simulation isn’t just a better physics model; it’s creating the best holistic model possible. The only way to predict the unexpected is to remove the boundary conditions of your model, and the value gains for doing so will be tremendous.

 

Established Simulation Network Designs

Run Time Infrastructure & the FOM

The concept known as “Run-Time Infrastructure,” or RTI, is a form of middleware used to handle segmented components of a simulation called “federates.” An RTI is a design principle that describes a system deployment; there’s no single way to implement one. It’s a specification with no requirements regarding network protocol, object storage, or programming language. Modern implementations typically need to comply with the current IEEE standards or HLA specifications, which I’ll touch on in the next section. The most basic thing to grasp about an RTI, though, is that it acts as the “live” component of a simulation, handling all of the dependent parts and generally managing the simulation pieces that the application client will use. The RTI is the layer that ultimately talks to the humans.

1000px-RTI.svg
“Below” the RTI, you find a collection of federates which are treated as a relational entity system called a “Federation Object Model,” or FOM. These federated components are each reusable and generally split up according to function. An example is the image below showing an RTI designed by NASA. One component of the federation models the controls or occupants of a spacecraft. Another module keeps track of the orbital position, speed, and Keplerian elements of the spacecraft’s orbit. Since the craft is designed to measure the Earth’s magnetic field, you would also have a federate which models the size, shape, and geomagnetism of the Earth.

If you wanted to, you could even split the Earth federate into smaller sub-federates and smaller API chunks. In this example, some service or worker pulls magnetic field data from a stored source or an external pipeline. Historically, most of the federates within an RTI need to be aware of each other at compile time, but different approaches offer multiple solutions to this issue; partial federations and ad-hoc federates are one solution I cover later. The fundamentals to remember about an RTI are that nodes perform functions, and networks of these nodes communicate with the client.

 

RTI-Nasa
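
To make the federation idea concrete, here is a minimal, hypothetical sketch of the RTI/federate relationship described above, loosely mirroring the NASA example. The class and method names are illustrative only and do not come from any real HLA library.

```python
# A toy RTI that routes attribute updates between registered federates.
# Names and structure are illustrative assumptions, not an HLA implementation.

class RTI:
    """Routes published attribute updates to subscribed federates."""
    def __init__(self):
        self.federates = []

    def join(self, federate):
        self.federates.append(federate)

    def publish(self, sender, topic, value):
        # Deliver the update to every other federate subscribed to this topic.
        for fed in self.federates:
            if fed is not sender and topic in fed.subscriptions:
                fed.reflect(topic, value)

class Federate:
    def __init__(self, name, subscriptions=()):
        self.name = name
        self.subscriptions = set(subscriptions)
        self.known = {}          # this federate's local view of shared state

    def reflect(self, topic, value):
        self.known[topic] = value

# Example federation, loosely based on the NASA diagram above:
rti = RTI()
spacecraft = Federate("spacecraft", subscriptions=["earth.magnetic_field"])
orbit      = Federate("orbit",      subscriptions=["spacecraft.thrust"])
earth      = Federate("earth")
for f in (spacecraft, orbit, earth):
    rti.join(f)

rti.publish(earth, "earth.magnetic_field", {"B_nT": 30000})
print(spacecraft.known)   # {'earth.magnetic_field': {'B_nT': 30000}}
```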

High Level Architecture (HLA) & Distributed Interactive Simulation (DIS)

Both HLA and DIS are Department of Defense (DoD) and DARPA sponsored initiatives which are now maintained as IEEE standards. The difference between the two exists at the protocol level, within the messaging layer: DIS defines concrete packet (PDU) formats that are typically sent over UDP, whereas HLA specifies services and object models and is agnostic to the transport.

However, despite finding a million papers characterizing the use cases and abstract concepts behind HLA, it is difficult to find functional examples of an HLA RTI deployment. The best I could find was a white paper on the “Umbra” system developed at Sandia National Laboratories.

From what I’ve managed to gather, an RTI uses an “ambassador” that functions as a worker or timed process, invoking function or network calls between the federates. This occurs at regular intervals, and works to connect objects to an ambassador, which connects to the RTI, which in turn connects to the clients.

HLA-example HLA-example2
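
Here is a hedged sketch of that ambassador pattern as I understand it: the federate talks to the RTI through one ambassador, and the RTI calls the federate back through a second ambassador on each tick. The names echo HLA terminology, but this is an illustrative toy, not the IEEE 1516 API.

```python
# Illustrative ambassador pattern: federate <-> RTI callbacks on a timed loop.
import time

class FederateAmbassador:
    """Callbacks the RTI invokes on the federate."""
    def __init__(self, name):
        self.name = name

    def reflect_attribute_values(self, obj, attrs, sim_time):
        print(f"[{self.name}] t={sim_time}: {obj} -> {attrs}")

class RTIAmbassador:
    """Calls the federate makes into the RTI."""
    def __init__(self):
        self.subscribers = []

    def join(self, fed_ambassador):
        self.subscribers.append(fed_ambassador)

    def update_attribute_values(self, obj, attrs, sim_time):
        # Fan the update out to every joined federate ambassador.
        for fed in self.subscribers:
            fed.reflect_attribute_values(obj, attrs, sim_time)

rti = RTIAmbassador()
rti.join(FederateAmbassador("orbit-model"))
for step in range(3):                      # the regular "tick" interval
    rti.update_attribute_values("spacecraft-1", {"alt_km": 400 + step}, step)
    time.sleep(0.1)
```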

Now, from a game or web developer’s standpoint, I’m not entirely convinced that these standards need to be implemented across the web. Firstly, it is not free to use them. HLA compliance is a certification scheme run by SISO and the US military, which is great for well-backed defense contractors but terrible for small open source teams trying to model spacecraft missions, like I was during the NASA Space Apps Challenge.

Either SISO should release a free-tier version of the HLA rules alongside a pay-to-use military version, or a free alternative should be constructed that is open to everyone and doesn’t require certification. (As it stands, access to the standard has to be purchased.) We live in the era of social coding and open collaborative projects; the best way to promote interoperability is to make standards free and easy to use.

Data Distribution Service (DDS)

HLA isn’t the only available standard for distributed data; DDS covers similar ground. Both offer a “publish-subscribe” architecture with decentralized communication. This middleware layer is a translator and routing service which sends essentially two types of messages:

  • Signals - “best-effort” continuously changing data (sensor readings, for example)
  • Streams - snapshots of data-object values which update states (the attributes or modes of the data)

The difference with DDS, though, is that it has quality of service (QoS) controls built in to ensure that messages reach their destination. HLA offers delivery guarantees as well, but with DDS the QoS can be switched between several different modes. Moreover, DDS is built around the idea of the keyed topic for building custom data constructs. While HLA is centered around the Object Model Template (OMT), DDS focuses on the Data Local Reconstruction Layer (DLRL) and Data-Centric Publish-Subscribe (DCPS).

So HLA is based on defining object hierarchies, while DDS is built for structuring more abstract data configurations. Both are designed to create distributed models across a network using an RTI.

DCPS

http://ift.tt/1oyd11A
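
To illustrate the keyed-topic idea, here is a conceptual sketch of how a DDS-style topic treats samples: each sample carries a key, readers keep one “instance” per key, and the latest sample replaces the previous state for that key. This is not the real DDS/DCPS API, just the shape of the idea.

```python
# Conceptual keyed-topic pub-sub; class names and QoS handling are made up.
from collections import defaultdict

class KeyedTopic:
    def __init__(self, name, history_depth=1):
        self.name = name
        self.history_depth = history_depth     # crude stand-in for a QoS policy
        self.readers = []

    def subscribe(self, reader):
        self.readers.append(reader)

    def write(self, key, sample):
        for reader in self.readers:
            reader.on_sample(self.name, key, sample)

class Reader:
    def __init__(self):
        self.instances = defaultdict(dict)     # one instance of state per key

    def on_sample(self, topic, key, sample):
        self.instances[key] = sample           # latest value wins

sensor_topic = KeyedTopic("air_quality", history_depth=1)
dashboard = Reader()
sensor_topic.subscribe(dashboard)

sensor_topic.write(key="station-42", sample={"pm25": 11.3})
sensor_topic.write(key="station-42", sample={"pm25": 12.1})
print(dashboard.instances["station-42"])       # {'pm25': 12.1}
```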

 

Scene Building

Static Asset Conversion

Rather than keeping all the assets in their run-time format, it is easier to store objects in a portable storage format. This has been the common approach for nearly every system in the post-1 GB memory era. However, not all asset formats are created equal, and all need some special consideration to be used in a run-time environment. Below is an example of static assets being converted into a usable graphics library format, glTF.

glTF
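
Part of what makes a format like glTF convenient for the web is that the scene description is plain JSON (with binary buffers alongside), so a service can inspect or re-bundle assets without a full 3D engine. Here is a small sketch; “scene.gltf” is a hypothetical file path, and the dict-versus-list handling is just a guard for different drafts of the format.

```python
# List the meshes declared in a glTF scene description (illustrative only).
import json

with open("scene.gltf") as f:
    gltf = json.load(f)

meshes = gltf.get("meshes", {})
# Some glTF drafts key meshes by id; later versions use an array.
if isinstance(meshes, dict):
    names = list(meshes.keys())
else:
    names = [m.get("name", f"mesh_{i}") for i, m in enumerate(meshes)]

print(f"{len(names)} meshes:", names)
```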

 

Distributed Scene Graph (DSG)

The DSG is an effort by Intel to create dynamic scene components with a scheduled sync, which send bundled assets to a client manager that either renders the scene itself or pushes an update out for the client to render. This system is built to scale to thousands of users, managing a client directory through a set of networked hosting servers.

To some extent, this becomes dependent on the middleware, which is the scene update propagation service. However, by chunking the scene out by functional component and by region, and having system state updated between concurrent client managers, you are able to serve a lot of clients very quickly with the same data.

DSGArch

http://ift.tt/1oycZa3
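
Here is a hedged sketch of that partitioning idea (not Intel’s actual code): scene objects are bucketed by region, and each client manager only receives the scheduled sync for the regions it serves. The region size and class names are assumptions for illustration.

```python
# Region-partitioned scene sync, in the spirit of the DSG description above.
from collections import defaultdict

REGION_SIZE = 256.0   # metres per region tile; illustrative value

def region_of(x, y):
    return (int(x // REGION_SIZE), int(y // REGION_SIZE))

class SceneService:
    def __init__(self):
        self.objects = {}                     # object id -> (x, y, state)
        self.managers = defaultdict(list)     # region -> client managers

    def attach_manager(self, region, manager):
        self.managers[region].append(manager)

    def update_object(self, obj_id, x, y, state):
        self.objects[obj_id] = (x, y, state)

    def scheduled_sync(self):
        """Bundle updates per region and push them to interested managers."""
        buckets = defaultdict(dict)
        for obj_id, (x, y, state) in self.objects.items():
            buckets[region_of(x, y)][obj_id] = state
        for region, bundle in buckets.items():
            for manager in self.managers[region]:
                manager.apply(bundle)

class ClientManager:
    def __init__(self, name):
        self.name = name
    def apply(self, bundle):
        print(self.name, "received", bundle)

svc = SceneService()
svc.attach_manager((0, 0), ClientManager("manager-A"))
svc.update_object("avatar-1", 10.0, 42.0, {"anim": "walk"})
svc.scheduled_sync()
```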

Modern FOMs & Node Networks

Now enter the experimental and iteratively more complex types of distributed simulation networks. What HLA and DDS offer are tried-and-true schools of thought on building systems that need to work on a patchy, high-latency network. They were conceived before broadband was widely available, and since then a lot of new approaches to slicing up the simulation cake have been built.

Partial Federations

One notion which I am partial to, pun intended, is the idea of partial federations. With a traditional FOM, all of the chunks need to be declared at the beginning (usually at compile time) so that all of the different parts are able to talk to one another. What some people have speculated on, however, is the concept of modularized federates which offer functional components only as required.

This means that you might start with a base layer of core pieces which are needed for the RTI to run at all. On top of that you have the minimum required modules, which carry the base language needed to host secondary “stacked” components. Think about this from the perspective of automata: you define a base language or compiler, then simply have modules which can be loaded or discarded based on need.

This maximizes compute efficiency and frees up memory to handle dynamic data instead of the data model. Here’s a cool diagram showing these dependency stacks as Lego blocks (a small code sketch of the idea follows after the diagram):

dependantFOMs

http://ift.tt/1nXRGw0
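
As promised, here is a sketch of the partial-federation loading idea under the assumptions above: a core set of federate modules always runs, and stacked modules are resolved and loaded only when a scenario needs them. The module names and registry are hypothetical.

```python
# Resolve which federate modules a scenario actually needs (illustrative).
CORE_MODULES = {"time_management", "object_registry"}

AVAILABLE_MODULES = {
    "weather":  {"requires": {"object_registry"}},
    "wildlife": {"requires": {"object_registry", "weather"}},
    "economy":  {"requires": {"object_registry"}},
}

def load_plan(requested):
    """Return the minimal set of modules needed for a scenario."""
    loaded = set(CORE_MODULES)

    def load(name):
        if name in loaded:
            return
        for dep in AVAILABLE_MODULES[name]["requires"]:
            if dep not in CORE_MODULES:
                load(dep)               # recursively pull in stacked dependencies
        loaded.add(name)

    for name in requested:
        load(name)
    return loaded

print(load_plan({"wildlife"}))   # core modules plus 'weather' and 'wildlife'
```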

RESTful System to Build an RTI

HLA and DDS are for the most part agnostic to the transport itself; they simply define what should be transported. Any FOM can use UDP, TCP, or whatever else works best. On the modern web, the king of the application layer is HTTP, and asynchronous apps can be built using WebSockets over TCP. This is the basis for nearly all of the web APIs in use, and it has already been extended into an RTI countless times. It is even the approach used by the rSimulate MetaSim platform.
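
As a minimal sketch of the HTTP-as-transport idea, a federate could expose its published attributes behind REST endpoints so any other federate (or a browser client) can read or update them. Flask is just one convenient choice here, and the routes and payloads are illustrative assumptions, not any standard.

```python
# A toy REST-facing federate: shared object attributes behind HTTP endpoints.
from flask import Flask, jsonify, request

app = Flask(__name__)
attributes = {"spacecraft-1": {"alt_km": 400.0, "battery_pct": 87.0}}

@app.route("/federate/objects/<obj_id>", methods=["GET"])
def read_object(obj_id):
    # Other federates poll (or could subscribe via WebSockets) for state.
    return jsonify(attributes.get(obj_id, {}))

@app.route("/federate/objects/<obj_id>", methods=["PUT"])
def update_object(obj_id):
    attributes.setdefault(obj_id, {}).update(request.get_json(silent=True) or {})
    return jsonify(attributes[obj_id])

if __name__ == "__main__":
    app.run(port=8080)
```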

There is a fully developed product out there called WebLVC, an out-of-the-box RESTful RTI that deploys HLA. It’s commercially available, mostly marketed to the defense industry.

webLVC

Future Architectures

Distributed Autonomous Ad-Hoc Grids

Firstly, in order to create the truly astounding grids of the future, we are going to need far more cloud resources that are freely available and cheap to come by. Cryptocurrencies with grid computing built in might offer the best solution to this challenge. Having grid computing or cluster systems readily available on demand would do wonders for creating more intricately rich simulations.

autonomicity

http://ift.tt/1nXREEw

Nested Model Levels

Taking the principles of a nodal federate system, it is possible to treat each member of the federation as its own independent set of sub-federates. One node in the simulation manager could be weather modeling, which could in turn have its own associated grid for massively parallel computation. So long as time synchronization at the overall node doesn’t need constant in-memory updates from the dependent grid, you are clear to create as many grid federates as you can.

Now, if you split a nested simulation into independent grid nodes, you would become bottlenecked by the centralized node relaying state changes to the downstream consumer grid parts. However, if you have an agreed language of communication between sub-node components in different master federates, you can arrange for faster communication directly between the sub-node pieces.

For example, you might have a massive grid simulating weather and another one simulating wildlife. Rather than having a message bottleneck between a centralized-state weather model and a centralized-state wildlife model, you could partition each grid by geography, and then have those two partitions live in the same local memory.
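
Here is a sketch of that co-location idea (illustrative only): both models are partitioned by the same geographic tiles, and each worker holds the weather and wildlife state for its tile in the same local memory, so the coupling between them never touches the network.

```python
# Geographic co-location of two coupled models on one worker per tile.
def tile_of(lat, lon, tile_deg=1.0):
    return (int(lat // tile_deg), int(lon // tile_deg))

class TileWorker:
    """One grid worker owning both models for a single geographic tile."""
    def __init__(self, tile):
        self.tile = tile
        self.weather = {"rain_mm": 0.0}
        self.wildlife = {"deer": 100}

    def step(self):
        # Local coupling: no network hop needed between the two models.
        self.weather["rain_mm"] += 2.0
        if self.weather["rain_mm"] > 10.0:
            self.wildlife["deer"] += 1      # wetter tile, more forage

workers = {t: TileWorker(t) for t in [tile_of(45.0, -122.0), tile_of(46.0, -121.0)]}
for _ in range(6):
    for w in workers.values():
        w.step()
print({t: (w.weather, w.wildlife) for t, w in workers.items()})
```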

Grids Doing Different Things as Needed with Layered Abstraction

Consider the Copenhagen interpretation of quantum mechanics: no event is resolved unless an observer is present. However, we can predict the frequency of events and thereby do stochastic modeling in leveled tiers. Nobody hears the tree fall in the forest, but we can estimate how many trees fell in the last year, and we can guess which areas lost the most trees because we have another model that describes wind patterns on the mountain in question.

Or take another example: the streets of Venice. Twenty million tourists visit each year. Do you need to model every single person on those streets, right down to a neural network determining their actions? You could, but it’s a lot less cost-intensive to simply model their behavior probabilistically. Use an agent model to simulate people’s motion; maybe give each a simple finite state machine and have that live in memory as part of the federate. You can predict where they will go based on which needs are currently strongest.

venice2
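
Here is a sketch of that probabilistic crowd: each tourist is a tiny finite state machine whose next destination is picked from whichever need is currently strongest. All of the needs, states, and numbers below are made up for illustration.

```python
# Need-driven FSM agents: cheap enough to keep tens of thousands in memory.
import random

NEEDS_TO_STATE = {"hunger": "find_cafe", "fatigue": "find_bench", "curiosity": "wander"}

class Tourist:
    def __init__(self):
        self.needs = {"hunger": random.random(),
                      "fatigue": random.random(),
                      "curiosity": random.random()}
        self.state = "wander"

    def step(self):
        # Needs drift upward over time; acting on a need mostly satisfies it.
        for k in self.needs:
            self.needs[k] = min(1.0, self.needs[k] + random.uniform(0.0, 0.1))
        strongest = max(self.needs, key=self.needs.get)
        self.state = NEEDS_TO_STATE[strongest]
        self.needs[strongest] *= 0.3

crowd = [Tourist() for _ in range(10_000)]
for _ in range(10):
    for t in crowd:
        t.step()
print({s: sum(t.state == s for t in crowd) for s in NEEDS_TO_STATE.values()})
```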

Now of course, you can’t have a conversation with an FSM that thinks like an ant. For this, we need a “model transition.” A quick swap between the FSM and a real neural network would give you the feeling of immersion among real people without the cost of requiring real people. The idea is that an AI plays an actor for every person on the streets your character interacts with or comes into proximity of. This type of federate switching could give tremendous gains in realism with only a marginal increase in CPU time. I call it “Probabilistic Entangled Agents,” or PEAs, but I’ll have to elaborate on this concept at a later time.
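
A sketch of that model transition, under my own assumptions about how it might look: agents run as cheap FSMs until the player comes close, at which point the same agent is promoted to a more expensive behavioural model, then demoted again when the player leaves. The RichActor class is a stand-in for whatever heavyweight model (a neural network, say) would actually be swapped in.

```python
# Level-of-detail switching between a cheap FSM and a richer actor model.
PROMOTE_RADIUS = 20.0   # metres; illustrative threshold

class CheapFSM:
    cost = 1
    def act(self):
        return "ambient walking"

class RichActor:
    cost = 500
    def act(self):
        return "holds a conversation"

class Agent:
    def __init__(self, position):
        self.position = position
        self.model = CheapFSM()

    def update(self, player_position):
        near = abs(self.position - player_position) < PROMOTE_RADIUS
        wanted = RichActor if near else CheapFSM
        if not isinstance(self.model, wanted):
            self.model = wanted()             # the "federate switch"
        return self.model.act()

agents = [Agent(position=float(x)) for x in range(0, 1000, 10)]
for a in agents:
    a.update(player_position=105.0)
print(sum(isinstance(a.model, RichActor) for a in agents), "agents promoted")
```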

RTI with Intelligent Package Distribution

The strength of NodeJS’s npm, Python’s pip, or even .NET’s NuGet is the ability to list dependencies and versions, and then quickly install them. What is needed, ideally, is something like this written into the DNA of the wider RTI infrastructure. Docker.io and Vagrant offer VM imaging and custom deployments with prerequisites already installed and configured, and even Amazon EC2 lets you pick a custom image to be deployed based on need.

Moreover, most of these VPS image services also have APIs that let you deploy servers with just a RESTful call. The future of an autonomous, scalable simulation would be an RTI with a nuclear federate seed written at the operating stack level. It becomes ubiquitous to operating systems, or at least very simple to install. You bundle this with a set of value attribution stacks like Gridcoin, and then have a set of installable dependencies which can be deployed on an as-needed basis.

This might be created as another API on top of REST, or it might even be its own protocol channel, but ideally you could make the free resources of the internet reconfigurable on demand, as requested by a universe of services. These services would trigger automatic dependency installation and then scale in complexity in accordance with the value of the investment being pumped in. Something like DACs and Ethereum might complement this nicely.
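
As a hedged sketch of that idea: a federate ships a manifest of the packages and services it needs, and a small bootstrap works out what is missing before the federate joins the RTI. The manifest format, names, and local state below are hypothetical; a real system would delegate the actual work to pip/npm, Docker images, or a cloud provider’s API.

```python
# Resolve what a federate's manifest still needs before deployment.
MANIFEST = {
    "federate": "weather-model",
    "packages": ["numpy", "netcdf4"],
    "services": [{"name": "tile-store", "min_instances": 2}],
}

INSTALLED_PACKAGES = {"numpy"}            # pretend local state
RUNNING_SERVICES = {"tile-store": 1}

def plan_provisioning(manifest):
    """Work out which packages and service instances still need deploying."""
    missing_pkgs = [p for p in manifest["packages"] if p not in INSTALLED_PACKAGES]
    scale_ups = [(s["name"], s["min_instances"] - RUNNING_SERVICES.get(s["name"], 0))
                 for s in manifest["services"]
                 if RUNNING_SERVICES.get(s["name"], 0) < s["min_instances"]]
    return missing_pkgs, scale_ups

pkgs, services = plan_provisioning(MANIFEST)
print("install:", pkgs)          # ['netcdf4']
print("scale up:", services)     # [('tile-store', 1)]
```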

The most fun use-case scenario: you strap on your VR goggles and decide you want to spool up a virtual world to play a historical simulation game. You give it inputs on game duration, number of players, and level of immersion, and it spits back a set of tiered choices based on the desired complexity and the resources it takes to power that complexity. Now, if instead of paying for network traffic to a massively centralized ISP, you get to invest in a distributed network of git submodules and a multitude of cloud and grid components… well, suddenly you are participating in resource distribution based on the creation of real value and user-empowered content.

thedeep

The Resource Dilemma & the FCC

All of these monumental design paradigms offer novel ways to experience the internet and would create demand for the next significant push of Moore’s law. They would allow holistic simulations which model everything from climate change to economic agent models with billions of independent simulated agents. We could test policy in virtuo before signing it into law. However, the capabilities of these ever more expansive systems still depend on the computational and network resources available to them.

Imagine a single processor running the behaviors of a single complex agent, and then imagine the number of networked API calls needed to relay system state awareness to all dependent federates. Consider a billion grid services generating data at a constant rate, trying to communicate with a few hundred different federate models (wind patterns, air quality, crop health, soil conditions). You would have a very large chain of interdependent components which might need to send massive amounts of data between them. The same is true of the internet of things, which could be co-opted into a global awareness simulation model. This could be petabytes or exabytes of data flowing between components.
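
Some quick back-of-envelope arithmetic for that claim, using made-up but plausible rates: a billion grid services each emitting one small state update per second already lands the federation in petabytes-per-day territory, before any fan-out to subscribing federates.

```python
# Rough data-volume estimate; every rate here is an illustrative assumption.
services        = 1_000_000_000       # grid endpoints / sensors
bytes_per_msg   = 1_000               # one small JSON state update
msgs_per_second = 1

bytes_per_day = services * bytes_per_msg * msgs_per_second * 86_400
print(bytes_per_day / 1e15, "PB/day")   # ~86.4 PB/day before any fan-out
```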

So here we have a conflict with the vision of the future as laid out by Comcast lobbyists. The internet is in its biggest transitional period to date. We are finally reaching the point where technology allows a site to become a “place.” Technologies such as a federated RTI or a massively parallel grid offer the potential to turn our dumb apps into a series of intelligent components, each fulfilling an important function. However, if the FCC does not rule that the internet is a common carrier telecom service, the cost of communications will skyrocket and it will set back our ability to evolve the internet. Other countries will fill the void and America will become a technological backwater where only consumption-driven applications have the money to pay for the traffic between functional nodes.

It would be a world where we have the power to model the entire Earth through holistic simulation to fight climate change and global conflict, but instead find that the only data “bailiffs” with the financing to build these systems are marketers and major online retailers trying to siphon every last drop of cash from a consumer base which has no surplus to spend. If data rates between remote web services are charged the way Comcast would charge à la carte B2C rates, we would find not only that America’s future is desperately limited, but that all but the biggest players are forced to fold.

The startup economy would shrivel within a few years, and youth unemployment rates might resemble Spain’s by the end of the decade. If Tom Wheeler approves a tiered internet, his actions would amount to nothing short of treason. The damage would make the impact of the NSA leaks look like pocket change, and would potentially compromise, or magnify the costs of, hundreds of projects currently curated by the Defense Department. Making the non-commercial internet more expensive rather than less would grind our economy to a halt and threaten national security.

Will the future of the internet be one of massively distributed parallel simulations and emergent media? Or will it be the destruction of the world’s most useful invention so it can serve as a glorified cable television delivery system?

fcc



