Saturday, June 8, 2013

SOSU 2013 Part II: Technology Is Pushing Us to Create Digital Alternate Realities

Answering the Last Question

The idea of the simulated parallel universe, or “ancestor simulation,” is nothing new. In fact, Isaac Asimov’s psychohistory captures the concept of a simulated cognitive reality pretty well. In one of his greatest stories, “The Last Question,” the “Multivac” computer, witnessing the entirety of history, is asked how to stop and reverse the heat death of the universe by entropy. The computer finally succeeds by creating a new universe altogether.

Illustration of Asimov’s Multivac

With science fiction providing such an enormous wealth of inspiration, it’s no surprise that so many different groups are seeking similar objectives. However, what I’m beginning to learn is that each venture has its own approach, and I believe that the best approach will be the cumulative one. Via open-source culture, experimentation and collaboration, we will together create a new frontier.

In Part I of my statement, I described the small projects we learned from and how they drove us to begin a proper codebase. We started by tinkering in a WebGL three.js scene, which was augmented by Aaron’s work on the API. In Part II I detail the top similarly focused organizational projects. Some projects from Part I are so sophisticated that they could easily fit here, like Procworld, but here I’m focusing on larger group efforts.

What’s really interesting is that there is no clear boundary in quality or scope between large-scale professional projects, small hobbyist teams, or even individuals. What is consistent, though, is how few related open source projects there are. Most people are too motivated by money to share code, but I think that open source business models will ultimately be more competitive, which I’ll discuss at length later.

From this pool of projects, a few groups might become direct competition, while others are potential collaborators. I’ve made contact with some of these groups and found a generally positive attitude about what is to come. What would be really fun is a conference of some sort, attended purely by people dedicated to the singular objective of world simulation and generation.

Imagine an objective, open source super-project involving all of the parties I describe here. We could create a cutting-edge 3D renderer platform delivered over the web, load it with scientific-grade environment simulation modules distributed across cloud services and idle processors, and finally emulate the human mind and contextual knowledge with the best of cognitive software. We’re a lot closer to simulated reality than many realize; what it’s going to take is an open source movement to combine every sector into one holistic experience.

Then comes the challenging part: redefining economics to work for a world where everything is simulated and automated. We live in an era where personal “hobby” projects are more disruptive and productive than our day jobs, and that should make us seriously reconsider how currency and finance work. So let’s leverage the power of networks and pursue a better future through simulation!

Most Similar Large Scope Projects

There is already a movement towards this style of gameplay, and it’s been living in the imagination of gamers and simulation enthusiasts alike. Nobody enjoys watching their favorite series come to an end, or beating their game and running out of quests to fulfill. Nobody enjoys redundant gameplay or sandbox games with limited or restrictive mechanics. The reason so many people are drawn to game modding communities is for the perpetually expanding realm of experiences created among their peers.

The problem with modding is that in most cases the game engine source is not open, so the environment is constrained to whatever the engine is capable of. What my collaborators and I came to realize as the MetaSim project developed was that there isn’t a need for an all-expansive game, but instead a platform of easy-to-configure parts that are open source and thus infinitely “moddable.” This, in my opinion, is the only way to incorporate academic-level science modeling with on-the-fly game asset handling. The business we are considering would operate on a market and services economy rather than a consumption and direct-to-consumer goods system. Like most aspects of our project, however, creating a sustainable plan still needs time, and I will discuss this more in a later post.

That said, there is a spectrum of similarly scoped projects with differing ideas on how best to implement an open-ended experience. There is also a wave of experimentalism in project funding and delivery. We are in an awkward phase where large companies with restrictive DRM can botch a product release and self-censor their detractors, while small independent teams like Mojang can make a blockbuster game and deliver without problem by purposefully releasing the beta in an unfinished state. Or teams can publish through a third party like Steam. Teams can build their own engine or use an existing one under whatever licensing is required, and then charge accordingly for access to the product. Each case is somewhat different, and the experiences all have their own flavor and phased approach to holistic gameplay.

Universe Projects

The /r/Simulate community had only been around for about two months when a video was released that both terrified and electrified me. The video was about an endeavor called “The Universe Project.” It had very high production values, and it really knocked the wind out of our sails: it seemed someone else already had significant headway on a very similar project. Since our own project was still pure speculation at that stage, I wondered if this meant I’d have to abandon all of the work we’d already done.

Being afraid of the competition is silly, though; each project is different enough in approach that there isn’t any need to get stuck in aggressive rivalry. People don’t have to pick one product exclusively either. For example, many people who purchased Minecraft also bought and play Terraria. So, aiming to be proactive about the situation, I pinged the Universe Projects Facebook page and had a conversation with Nik, the project founder.

It was a mutually amiable conversation, and Nik himself noted the biggest difference between our projects: AI. This difference stems from how each of us approaches the “enormous world” problem. Roughly 150 million square kilometers of landmass is a lot of room for things to happen. His approach focuses on spawning players near one another, whereas ours aims to use AI agents to populate the planet. My reasoning is that everyone will want to play as a king or warrior, but if you look at history, most people who ever lived spent their whole lives farming or foraging. Granted, Farmville was ridiculously successful, but if we are aiming for historical realism, you don’t want every person on the simulated planet to be played by a user with a twenty-first-century perspective.

What is really fascinating, though, is that we both opted to use WebGL without ever discussing it. Long before U.P. announced the space game demo, we started working on our own three.js project. In Part I of this project I discussed our reasoning for this decision, but the short version is that there were too many awesome open source WebGL demos and libraries to ignore the technology. After doing research and talking to my peers, it sank in that delivery of full 3D content over the web would be the future. Especially since it has become hardware accelerated!
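For the curious, here’s roughly what that starting point looks like. This is a generic, minimal three.js sketch (not our actual MetaSim code; the sphere standing in for a planet is purely illustrative) showing just how little boilerplate a hardware-accelerated 3D scene needs in the browser:

```javascript
// Minimal three.js scene: a lit, spinning sphere standing in for a planet.
var scene    = new THREE.Scene();
var camera   = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var planet = new THREE.Mesh(
  new THREE.SphereGeometry(5, 32, 32),
  new THREE.MeshPhongMaterial({ color: 0x3366aa })
);
scene.add(planet);

// A single directional light so the Phong material has something to shade with.
var sun = new THREE.DirectionalLight(0xffffff, 1.0);
sun.position.set(10, 10, 10);
scene.add(sun);

camera.position.z = 15;

// Render loop: rotate the planet a little each frame.
function animate() {
  requestAnimationFrame(animate);
  planet.rotation.y += 0.005;
  renderer.render(scene, camera);
}
animate();
```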

Nik made a second video where he discusses the origins of and technical decisions in his project. There he lays out that their project intends to start with 2D HTML5 and eventually transition to 3D as compatibility and support across platforms become possible. Next-gen consoles, smartphones, tablets: all will very soon have the capacity to host hardware-powered streaming Web3D content. However, I foresee conflict between established media service providers and this emerging class. Just look at forced service denials like Hulu versus Google TV, or YouTube versus Microsoft. The future of content will be platform independence and openware, but whether the world’s technology empires stand in the way of progress or support it remains to be seen.

What I do find technically interesting about Nik’s project is their use of Atmosphere, a realtime data transport framework for JVMs supporting HTTP, JSONP and other extensions. I’m still too new to how messaging systems and I/O web processes work to fully appreciate the broad application cases for Atmosphere. U.P. also notes the use of App Engine and Hypertable, which is based on Google’s Bigtable. That matters because this kind of data store uses MapReduce and is a powerful structure for large-data applications. Without organizing volumes this way, big data cannot be handled at application performance levels. For example, this suburb of Paris alone requires an entire petabyte of data. For MetaSim, Aaron constructed an API and we’re using MongoDB as our scalable datastore. There’s no telling how permanent this might be, as I’ve read negative things about the read/write process, but for the time being it works well with our node.js server code.
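To make that stack concrete, here is a toy sketch of the node.js-to-MongoDB round trip our API performs. The collection name and document shape are invented for illustration; our real schema is still in flux:

```javascript
// Sketch: persisting and re-reading a simulation entity with the node.js MongoDB driver.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/metasim', function (err, db) {
  if (err) throw err;

  var planet = {
    name: 'Demo-1',          // placeholder name
    radiusKm: 6371,
    seed: 42,                // seed for procedural regeneration
    updatedAt: new Date()
  };

  db.collection('planets').insert(planet, function (err) {
    if (err) throw err;
    // Read it back, e.g. when a render server requests this world.
    db.collection('planets').findOne({ name: 'Demo-1' }, function (err, doc) {
      if (err) throw err;
      console.log(doc);
      db.close();
    });
  });
});
```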

I sincerely wish Nik and Universe Projects the best of luck. I think they have a great game plan for the future of their product. I do worry that their recurring donation and referral program might be too aggressive and scare off would-be donors, but with three quarters of a million visitors to their home page, reclusive donors will likely contribute once the project passes enough milestones to ensure its long-term success. Despite our differences in approach, Nik and I have both been imagining this game for a long time. (My project dates from 2007.) With any luck, we’ll both succeed and potentially work together someday. Or maybe we’ll just host a conference together at some point! :)

Zemerge

Imagine a simulation sophisticated enough to model our political and socioeconomic climate, influenced by simulated systems across all disciplines. There is no single vision I could agree with more than Tom Skazinski’s words:

“The Zemerge project advocates a paradigm shift in social change in an attempt to combine genetic algorithms and social simulations with the feedback from the environment and individuals to produce a continuous optimal emerging social schema.”

Skazinski has already created a rudimentary household modeling system; while lacking visual or interactive components, it does its job as an asset manager, letting you see how individuals, businesses, banks and the government work together.

However, as you can gather from Skazinski’s site and Youtube channel, he has much more planned. Skazinski understands Wolfram’s insight that recursive systems beget emergent higher-order complexity. This is common ground with my own thought process and with any scientist who can seriously consider the simulation hypothesis.

Even if we are not occupants of a simulation, we will need to use complexity theory to create our model universes. Rather than defining the institutions of society like banks and taxes directly, we could ideally build a biological and social simulator abstract enough that those institutions emerge on their own. We would host an agent-based modeling system where each agent is either modeled stochastically or treated independently and driven by a full neural network, similar to Watson.
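To sketch what “modeled stochastically” means in practice, here is a deliberately trivial agent loop in JavaScript. The states and probabilities are invented for illustration; a fully modeled agent would simply swap the random policy in decide() for a neural network’s output:

```javascript
// Toy agent-based model: each agent updates stochastically once per tick.
function Agent(id) {
  this.id = id;
  this.food = 10;
  this.state = 'foraging';
}

// Stochastic policy: mostly forage, occasionally trade or rest.
Agent.prototype.decide = function () {
  var r = Math.random();
  if (r < 0.7)      this.state = 'foraging';
  else if (r < 0.9) this.state = 'trading';
  else              this.state = 'resting';
};

Agent.prototype.step = function () {
  this.decide();
  if (this.state === 'foraging') this.food += 1;
  this.food -= 0.5; // metabolism: every tick costs food
};

// Run a tiny population for fifty ticks.
var agents = [];
for (var i = 0; i < 100; i++) agents.push(new Agent(i));
for (var t = 0; t < 50; t++) agents.forEach(function (a) { a.step(); });
```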

Skazinski’s project goals align absolutely 100% with my own. (I do also have some scores to settle regarding the Fermi paradox, though.) But the betterment of all humankind by means of holistic simulation… there is no better goal. Skazinski says it perfectly himself:

“The goal of this project is the total integration of all simulation areas into a unified open source and perpetually evolving engine that holistically through algorithmic optimization and feedback loops is able to provide society the optimal sets of rules for social organization and direction.”

I reached out to Tom (it seems there are a lot of like-minded Toms out there!) and we had a lot in common. We both yearn for a “Manhattan Project”-style effort where we could work on our interests full time, but of course the where and how (to afford it) still need work. I’m still waiting to hear more about his upcoming secret project!

Together, I believe we can gamify the recursive improvement of human welfare! I’m looking forward to working with him in the future if we’re ever able to have that opportunity! Toms unite!

Existing Simulation, Gaming, and Engine Projects

Not all large projects have a scope as vast as MetaSim, Universe Projects or Zemerge. Early on, my collaborator Kevin Tinholt discussed several similar projects with common goals. As time progressed we found more and more, but at partial scope. I discussed the smaller projects in Part I. Of the larger projects, most are centered primarily on the procedural environment, not necessarily what happens within it. In my first post I mentioned Miguel Cepero’s procedural worlds as state of the art in terms of content generation. Commercial game engines are still far superior renderers, but his software can output to them.

In terms of building a renderer ourselves, I’m not sure how far we will ever get on our own. If we keep the project open source and attract enough developers to the cause, I could see three.js or something similar being extended into something phenomenal. A web-powered game engine that works cross-platform is what we want to create, and that means the terrain, the textures, the modeling and the animation. It will require far more assets than we have available today.

Right now we are mainly focused on the backend: the modular network protocol and the proof of concept for sending simulation data between servers such that stress on the renderer can be offloaded. It’s up to new contributors to start building the assets and the reference engines we will need. With that in mind, we need to look at other projects for inspiration and guidance. How do they work? How does a smooth voxel system translate to WebGL? How much do we buffer and stream remotely versus load locally? There are a lot of questions to be answered and a lot of room for new contributors to step up and help us make something truly phenomenal together! So let’s check out those other projects.

Specific Scope Indie Platforms

A few games outside the scope of mainstream publishers are pushing for unique new experiences that can only be found in the experimental environment of small, independent teams. With the advent of Kickstarter, developers of all scales and experience levels have suddenly been able to finance their projects. The types of games emerging are the ones people really want to play. There is a democratic self-selection to the Kickstarter platform that allows for greater experimentation and thus more original gameplay!

In regard to the MetaSim project, some of these projects have similar aims. Chief among them is the ease with which procedural content is being incorporated into gameplay mechanics. There is a whole host of voxel-based platforms, each trying in its own way to atomize the world. While first-person perspective gaming seems to be doing really well, detailed strategy gaming at the scope and detail level of Civilization or Paradox seems to be non-existent. That is the first niche for MetaSim to tackle, and we can expand into other game/sim modes from there. If anyone from any of these projects wants to collaborate with us, email me, we have some work to do! :)

Voxel Based Platforms

Procworld

How can I not mention Procworld again? Miguel Cepero’s work is the most studied and focused smooth-voxel environment engine around! I described his project in Part I, but I want to put it here to contrast it against other voxel systems and show how much more sophisticated it really is. His newest video shows block unit palettes and non-cubic grid tiling. I had a brief correspondence with Miguel back in December, and I hope I’ll be able to work with his codebase at some later date. I’d really love to see his render environment pushed to WebGL or some streaming service!

Upvoid

Newer and less developed than Procworld, but still very impressive! This natural voxel system has destructible terrain, dynamic trees, and stylized in-game models. It uses some badass tricks like tensor decomposition and normal-mapped tessellation! The game looks incredible and the devblog is informative, so definitely follow these guys! They describe their gameplay as “a mix of Garry’s Mod, Skyrim, Minecraft, and Dwarf Fortress.” YES! Now if only we could convince them to collaborate with us on porting it to WebGL instead of just OpenGL. Maybe for v2 :)

 

Infinite Pixels

Infinite Pixels is a voxel-based game set in a procedural space environment. The author claims there will be a free-roaming space setting with three solar systems, up to four planets, and over 40 moons. The objective is to stay alive by extracting water and growing crops. There seems to be a disconnect between the boxy voxels of the buildings and the smooth polygons of the planets and moons. The key takeaway here is the scale: whole voxel planets with gravity baked in. While I don’t know if they use any LOD or frustum loading system, it may not be needed given the simplicity of the renderer. It’s a step in the right direction and it should be fun to play, so get out there and support these guys!

 

Timber and Stone

Timber and Stone is essentially a migration of Dwarf Fortress into a Minecraft-style voxel world. It lets you control multiple units in an RPG-style system where you direct each unit to perform actions, collect resources, and build things. Created by a one-man team, Robert Reed, it promises to be a very robust system for voxel-based exploration. Where it really excels over its competitors is the layered dungeon mode, which lets you see into dungeons slice by slice as you venture deeper down. What would be really great, though, is if you could also jump into the perspective of an individual and explore in first-person mode!

 

Stonehearth

Stonehearth is, in the developers’ own words, “a sandbox strategy game with town building, crafting, and epic battles!” It is very similar to Timber and Stone, but perhaps with more focus on the RTS elements and nation-based combat. In this regard it is a departure from most of the other voxel games. Being RTS-based, they will need to invest more heavily in the AI, and I am excited to see how that plays out. As their about page describes the system, “an AI ‘dungeon master’ observes your behavior and tweaks the content based on your actions.” Wow!

 

Terrain Driven Projects

If you don’t restrict yourself to a voxel environment, there are projects pursuing realism at a whole new level. Human-scale terrain spanning the Earth’s entire surface is now possible, thanks to streaming systems and procedural generation. SpaceEngine effectively combines real-world data and procedural data to simulate the entire known universe! There’s some cool stuff here, all being done with current-gen graphics technology instead of fancy ray tracing or atomized polygon queries.

Extrasolar

Extrasolar by Lazy 8 Studios appears to have created a rover-based experience for exploring extrasolar planets that utilizes static renders, which allows them to have deeply detailed scenes with believable geomorphology. Anyone who has ever played with Terragen or Vue knows just how detailed non-real-time engines can be. While this game is an ingenious exploit of that fact, real-time renderers seem to be catching up to this level of detail pretty fast!

 

Limit Theory

“Fundamentally this game is about freedom, and my ultimate goal as I am developing it is to give you, the player, as much freedom as possible for interacting with these universes in a deep way.” Limit Theory is a space exploration game with procedural planets, solar systems and ships. There is some form of yet-to-be-revealed in-game commerce, and fighting space pirates may be the central conflict of this sandbox experience. I’m blown away by the level of professional development that Josh Parnell has single-handedly been able to produce.

 

Space Engine

Space Engine 0.97 is perhaps the most captivating universe simulator released yet, and it’s available today! Similar to the outdated Celestia platform, it procedurally renders the entire universe, combining real-world astrometric data and known exoplanets with the rest of the universe generated as it is observed! The programming methods used here are phenomenal and scientifically sound. It is my ambition to someday mimic the capabilities of this engine in our WebGL format. Fortunately my background in astronomy should help in getting the detail in there. Bolometric magnitude, anyone? Bremsstrahlung spectrum? I’ve got your back; I’ll do a detailed astronomy programming post soon enough.
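Since I brought it up, here’s the flavor of astronomy code I mean: the standard relations between bolometric magnitude, luminosity and distance, as a minimal JavaScript sketch (the solar constant below is the commonly quoted value):

```javascript
// Absolute bolometric magnitude from luminosity, and apparent magnitude at a distance.
var M_BOL_SUN = 4.74; // Sun's absolute bolometric magnitude (commonly quoted value)

// M_bol = M_bol,sun - 2.5 * log10(L / L_sun)
function bolometricMagnitude(luminositySolar) {
  return M_BOL_SUN - 2.5 * (Math.log(luminositySolar) / Math.LN10);
}

// m = M + 5 * log10(d_parsecs) - 5
function apparentMagnitude(absoluteMag, distanceParsecs) {
  return absoluteMag + 5 * (Math.log(distanceParsecs) / Math.LN10) - 5;
}

// Example: a star of 100 solar luminosities seen from 25 parsecs away.
var M = bolometricMagnitude(100); // ~ -0.26
var m = apparentMagnitude(M, 25); // ~ +1.73
```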

 

Proland

Proland is a research prototype being developed “for the real-time rendering of multi-resolution terrains (up to whole planets), the real-time management and edition of vector data (representing for instance roads or rivers), the rendering of atmosphere and clouds, the rendering and animation of the oceans, and the rendering of forests.” There is extensive documentation and it is open source.

 

Outerra

Similar to Proland, except that Outerra is not open source and has plans to become a game engine of some sort. The scale is likewise incredible: you have data for the entire Earth with seamless LOD transitions. It uses OpenGL, it’s fully multi-threaded, and it has some type of Chromium-based browser service for integration with Google Maps and similar applications. If our project ever had to adopt a third-party desktop engine, this one might be in consideration. Although honestly I’d love to see this built into an industry-grade engine, possibly Unreal 4 when it comes out!

 

Infinity: The Quest for Earth

Infinity: The Quest for Earth has been on the development backburner since 2006. To many it feels like vaporware, but the fan community has been involved in the alpha, and screenshots have been surfacing. It’s been in the shop for a long time and it’s being out-competed by individuals like Josh Parnell. Still, it inspired others to pursue the same goal, showed it was possible, and may have more comprehensive gameplay mechanics planned than others in the genre. Of course, it will have to compete with Star Citizen, which has racked up nearly $10 million in crowdfunding and has industry veterans pushing development. But Star Citizen isn’t fully procedural! :)

 

Starforge

StarForge is one of the most exciting projects currently in development on the Unity engine. Developed by CodeHatch in Edmonton, this young group is going to deliver possibly the most unique FPS ever created! What excites MetaSim, however, is the stylized voxel system and procedural terrain they’ve built into Unity. I’m not sure they grasp the full potential of a system like that: with some effort, they could create tons of different games and genres on top of their existing Unity work. They are perhaps one of my favorite projects to watch and I will definitely revisit their work soon!

 

Industry Benchmark

Of course this all needs to be benchmarked against the cutting edge in industry. Here’s the best in static and dynamic renders as a comparison.

Realism in 3DStudio Max
Realism in live renders

 

Society and Strategy Focused Game Projects

Some of the projects listed in the voxel and terrain sections above could easily be listed here. Stonehearth, for example, is very strategy-driven, but as a voxel game it belonged up there. These are the few scattered projects that don’t fit neatly into the terrain category.

EndCiv/PreCiv

The German group Eyecystudio has two games in the works. One is called Endciv and focuses on rebuilding society in a post-apocalyptic world; it’s a global simulation with what appears to be an in-game environment shown at the RTS scale. The other is Preciv, which uses the same engine and will instead model the spread of a global pandemic. The overall global engine looks great, and the integration of gameplay at multiple scales seems pretty sophisticated for a project of this size. It reminds me a lot of the Superpower franchise, which was way ahead of its time and has not had its core mechanics replicated since. (Maybe Defcon counts, but with significantly less depth.)

Preciv pandemic simulator
Air nodes in Europe
The whole world
Global view
RTS level view
Eroding coastlines in Endciv

Predestination

Predestination is a space 4X game built around a hexagonal sphere for planet colonization, plus a flat-grid turn-based system for space combat. Ultimately, though, it’s procedural galaxy colonization, and I think that’s awesome. What I find really cool is the open style in which they are developing. Some might see fan involvement in the design process as a naive way to get extra resources at no cost; what is really going on, though, is democratization of experience design. Paradox does this to some extent already, and it’s what has made them wildly successful. I’m really excited to see what these guys come up with!

 

Biology and Climate Focused Game Projects

This category of simulation is almost entirely academic in nature. Most biology-focused software projects have to do with population modeling and health. I still haven’t done enough research on those models, but for fun you can check out Karl Sims’ evolving creatures! If anybody knows a lot about biology simulators, post away! That said, here are a few gaming projects I wanted to focus on.

 

Species: Artificial Life, Real Evolution

Species is a game that is trying to model the dynamic evolution of creatures from the ground up, using simulated biochemistry and physiology to model wildlife and a food-chain ecology. There is a timescale, there are clade diagrams, and real evolution is being modeled and tested! Despite their IndieGoGo campaign falling through, the dev “Qu” has persevered and is delivering fresh content and blog posts routinely.

His progress is tantalizing; I really hope that aspects of it can be ported to other engines. Something like what he has, a generation or two from now, with real physics… that’s what I want. I want to put that into MetaSim and let things run wild! Seriously, when it comes to simulating a full biome, it might be easier to invent procedural evolution than to try to balance everything at an abstract level.

 

Thrive

Thrive claims to be a Spore alternative; they have been prototyping for some time and are finally reaching their first development cycle. They are starting with an editor for the cellular stage. The group offers enormous potential, but they face a lot of technical challenges (as we all do) just in getting the right codebase started.

The scope of their end product is massive; they aim to be one of the most holistic attempts at creating what Spore was supposed to be. Like us, they have their own subreddit and a Facebook page. I hope that at some point they consider porting to a web platform with us. If they worked together with the Species developer too, perhaps something truly unique could emerge! Rather than having three teams recreate Spore-like mechanics from start to finish, a modular group approach could cut down work time and pool resources for cross-team skill-sharing.

Either way, I’m stoked to have these guys doing something so great and keeping it close to the community like we are attempting to do. Keep it up guys!

 

AI, Agents and Neural Networks

The greatest destructible smooth-voxel environment in the world won’t mean anything if we can’t fill it with compelling characters and believable fauna. For that reason, /r/Simulate has served as a collective brain dump for all things artificial intelligence. Simulating the human mind, or general intelligence, is the holy grail of computer science. Our projects likely won’t pioneer the creation of any AGI; we’ll leave that to the professionals. What we are interested in is the application! We are looking for the platforms that are the most robust, the most modular, and the most open.

In Part I I mentioned some of the one-person projects like RaveAI and Tom Barbalet’s Noble Ape. This time, I’m focused on more general-purpose projects being worked on by organizations.

Neural Networks

Artificial neural networks, specifically, are layered systems of interconnected nodes where each node represents an artificial neuron. A set of nodes forms a net, and each net might implement some function. Groups of nets work together to determine which functions to apply, and can create feedback loops to reinforce certain system states.
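For readers who haven’t met them before, here is the whole idea compressed into a few lines: a tiny feedforward net computing XOR, the classic function a single neuron cannot represent but a two-layer net can. Real networks learn their weights through training; these are hand-rigged purely for illustration:

```javascript
// Minimal feedforward network: one hidden layer, sigmoid activations.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

// Apply one layer: each row of weights belongs to one neuron.
function layer(inputs, weights, biases) {
  return weights.map(function (row, i) {
    var sum = biases[i];
    for (var j = 0; j < inputs.length; j++) sum += row[j] * inputs[j];
    return sigmoid(sum);
  });
}

// Hidden layer computes OR and NAND; the output neuron ANDs them -> XOR.
var hiddenW = [[20, 20], [-20, -20]], hiddenB = [-10, 30];
var outW    = [[20, 20]],             outB    = [-30];

function net(a, b) {
  return layer(layer([a, b], hiddenW, hiddenB), outW, outB)[0];
}

console.log(net(0, 0), net(0, 1), net(1, 0), net(1, 1)); // ~0, ~1, ~1, ~0
```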

Neural nets will be the single most important aspect of the MetaSim project, and we have barely even touched them. Since we have a specific application in mind, we wanted to focus on the environment first. As it turns out, environment and sensory feedback-response systems are paramount in creating AGI. There is a schism in the field about whether embodiment is required for general intelligence. My modest speculation is that AGI can do well in a lot of abstract environments, but that human-like AGI will need human-like embodiment or simulation.

While there are plenty of great systems and projects out there, I’d like to focus on just two of them: Encog and OpenCog. Both have distinct advantages and applications, and I hope that in due time both are tried out in whatever simulated environments our meta-project comes up with. Who knows, maybe we benchmark each system by seeing how long it takes the agents to do certain tasks like building shelters or discovering agriculture. Or maybe we just pit the systems against each other in gladiatorial combat?

Encog Project

The Encog project was created by Jeff Heaton, who also writes on the subject. I’ll have to pick up some of his books; they cover everything from HTTP bots to network theory, and also explain how to use Encog. Each of his books comes in both C# and Java versions. Encog’s source is available, and it offers a great starter and examples library for jumping right in. Plus, his Youtube channel has a lot of educational resources.

 

OpenCog

OpenCog counts the famed Ben Goertzel as a contributor, integrating his DeSTIN sensory models into the framework. Their goal is to reach human-level AGI by the end of 2021. Essentially they are building a platform to create and categorize “novelties” based on sensory patterns. A combination of Bayesian inference and unsupervised learning, this type of deep learning is the bleeding edge of AGI. Google is doing comparable work with deep learning at its X labs.

 

Agent Based Modeling

There is a LOT of ABM software available. Such a large number of tools means more experimentation and, by consequence, perpetual improvement. I have listed what seem to be the most cutting-edge and actively developed projects; I could be wrong, as I am still very fresh to the technology, and I appreciate any insight from the experts! Here’s my take on what ABM is good for, how some of it works, and how we could apply it to MetaSim!

 

Flame

Flame is one of the most robust agent modeling systems to date. Its applications range from living tissue to crowds in a mall, and its scalability means it can run on a laptop or an HPC supercomputer. A recent redesign optimized it for parallel computing, making the software truly large-scale. One weakness admitted by the developers is the inability to handle semantics, and a better non-broadcast messaging system between agents could streamline things further.

Plans to beef up a reusable object-oriented design will eventually let Flame run multiple “coupled simulations,” much like what we have in mind for our own project. This will rely on the Advanced Message Queuing Protocol (AMQP), middleware that operates at the wire level instead of at the API level, and can thus enjoy broader acceptance instead of being proprietary. The complex network architecture being considered for FLAME will distinguish it significantly from other ABM software; we may have to consider using it in conjunction with whichever neural network tools it supports best, and then send messages to our render server, of course, for easy internet delivery of the content! (See the messaging sketch after the screenshots below.)

Breadth of Flame capability

Pedestrian simulation
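To illustrate why non-broadcast messaging matters, here is a toy inbox scheme in JavaScript. This is my own sketch of the general idea, not FLAME’s implementation (which speaks AMQP at the wire level): instead of every agent broadcasting to all others each tick, messages are addressed to named inboxes, so the cost scales with the mail actually sent:

```javascript
// Point-to-point agent messaging: addressed inboxes instead of broadcast.
function MessageBoard() { this.inboxes = {}; }

// Deliver a message to one named recipient only.
MessageBoard.prototype.send = function (to, msg) {
  (this.inboxes[to] = this.inboxes[to] || []).push(msg);
};

// An agent empties and returns its own inbox at the start of its tick.
MessageBoard.prototype.drain = function (agentId) {
  var mail = this.inboxes[agentId] || [];
  this.inboxes[agentId] = [];
  return mail;
};

// Usage: agents exchange trade offers by id rather than shouting to everyone.
var board = new MessageBoard();
board.send('agent-7', { type: 'trade-offer', item: 'grain', qty: 3 });
var mail = board.drain('agent-7'); // agent-7 reads only its own mail
```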

 

NetLogo

Created by Dr. Uri J. Wilensky of Northwestern University, NetLogo specializes in quick-to-launch visual agent systems. This JVM-based language is derived from the earlier language “Logo,” a Lisp offshoot. It has primarily been used as a teaching tool, a gateway drug to bigger, more generalized ABM tools, but it also sees use in a lot of sociology settings and for species population modeling. A companion project called HubNet uses participation data to teach the tragedy of the commons.

 

Sugarscape/Ascape

Sugarscape is a simple JVM-based agent modeler built as part of Joshua Epstein and Robert Axtell’s Growing Artificial Societies: Social Science from the Bottom Up. It helped them model population and resource dynamics among the Anasazi people of the ancient Southwest. Ascape is the core language, to which Sugarscape is a library extension. Other derivatives branched from this, including MASON, Mathematica plugins, and “Sugarscape on steroids,” a version optimized for parallel computing on clusters.

GAMA

GAMA integrates agent-based modeling with real-world GIS data. It is being developed by French and Vietnamese teams that are part of UMMISCO, a French modeling and simulation organization. The application has its own scaling system for large numbers of agents, libraries of primitives, batch tools, and a UI based on the Eclipse IDE.

 

Repast

Open source and available in several language flavors for HPC, the Repast Suite includes a few key components, mainly the ReLogo tool. It’s one of the few ABM tools I’ve actually been able to experiment with; I jumped right in and built the zombie simulator from their tutorial. Their image-based render system is by no means the limit of what can be run, but it offers a quick way to sketch complex systems and just see what comes out.

It already has libraries for genetic algorithms, and it can perform a whole host of other useful operations including neural networks, regression, integration with existing Java projects, GIS, and event scheduling. There are a lot of wizards for ease of use and many different ways to visually program these systems.
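For flavor, here is the core of that zombie exercise re-sketched from scratch in JavaScript rather than ReLogo; treat it as an illustration of the tutorial’s logic, not Repast code. Each tick, every zombie simply moves one grid step toward the nearest human:

```javascript
// Toy version of the "zombies chase humans" ABM exercise.
function dist(a, b) {
  var dx = a.x - b.x, dy = a.y - b.y;
  return Math.sqrt(dx * dx + dy * dy);
}

function sign(v) { return v > 0 ? 1 : v < 0 ? -1 : 0; }

// One tick for one zombie: find the nearest human, step toward it.
function zombieStep(zombie, humans) {
  if (humans.length === 0) return;
  var target = humans[0];
  for (var i = 1; i < humans.length; i++) {
    if (dist(zombie, humans[i]) < dist(zombie, target)) target = humans[i];
  }
  zombie.x += sign(target.x - zombie.x);
  zombie.y += sign(target.y - zombie.y);
}
```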

 

Semantic and Contextual

ConceptNet

This project is run by the MIT Media Lab as part of the Open Mind Common Sense project, the brainchild of Marvin Minsky, and it’s open source! It uses a Python package to interface with a web API and a specialized URI hierarchy. The technology utilizes hypergraph sets to build the necessary associations.

To acquire knowledge for the hypergraph, the API interfaces with DBPedia and ReVerb, which mine Wikipedia for basic knowledge; by consequence it becomes a powerful tool for language translation. What I envision something like this being useful for is modeling the linguistics of a procedurally generated society, and then translating to English for consumption within gaming and digital anthropology. Since it’s already web-service based, it might fit smoothly into our own API or a planned future API.
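As a taste of the API, here is roughly what asking ConceptNet about “cloud” looks like from node.js. The endpoint path and response fields follow their public docs as I understand them; double-check against the current API before relying on this:

```javascript
// Sketch: querying the ConceptNet 5 REST API for assertions about "cloud".
var http = require('http');

http.get('http://conceptnet5.media.mit.edu/data/5.1/c/en/cloud', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var result = JSON.parse(body);
    // Each edge is an assertion, e.g. /c/en/cloud --/r/AtLocation--> /c/en/sky
    (result.edges || []).forEach(function (edge) {
      console.log(edge.start, edge.rel, edge.end);
    });
  });
});
```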

Freebase/Mindpixel

Mindpixel took the hard approach to AGI: manually compiling human true/false statements. Chris McKinstry saw that a hardcoded common-sense model could be applied commercially for a variety of clients, and pursued the project on that basis. That spirit lives on in Freebase, which operates under an open license. With half a billion facts and nearly 40 million topics, it is probably the largest single structured-schema system in terms of complexity. It even has a great in-browser query system that lets you search for metadata and spits out JSON results.
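Here is the flavor of such a query, sketched from node.js against Freebase’s MQL read service. The endpoint and query shape follow Google’s docs as I understand them, so treat this as illustrative rather than copy-paste ready; the null values mean “fill these in for me”:

```javascript
// Sketch: asking Freebase's mqlread service for the names of five stars.
var https = require('https');

var query = [{ "type": "/astronomy/star", "name": null, "limit": 5 }];
var url = 'https://www.googleapis.com/freebase/v1/mqlread?query=' +
          encodeURIComponent(JSON.stringify(query));

https.get(url, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // The response wraps matched records in a "result" array.
    JSON.parse(body).result.forEach(function (star) {
      console.log(star.name);
    });
  });
});
```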

Where this could really add value to a MetaSim project is if it were possible to have an open source equation-modeling and linked-variable system, such that if you search for types of clouds, there’s a set of differential equations and hydrodynamic functions explaining why a cirrostratus cloud looks the way it does. Ultimately we would want a universal equation for clouds, and could then match the statistical results against similarly constructed equation-derived results! This may also prove useful for generating a probabilistic technology tree, but I’ll cover that in Part III.

 

Experimental AI

Entropica

The Entropica project is mostly AI-focused, but it has broad application to social systems. Developed by Dr. Alexander Wissner-Gross, this software attempts to use “entropic forces” to model intelligent behavior. He challenges us to think about intelligence in an entirely new way. I’m not sure how this differs from a Boltzmann machine neural network, but I am hoping to figure that out as I learn more. For those interested, context is provided in his paper titled “Causal Entropic Forces.”
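For the mathematically inclined, the paper’s central definition is compact. The causal entropic force on a system at state X0 is the gradient of the entropy of its accessible future paths, scaled by a “causal temperature” Tc:

```latex
F(X_0) = T_c \, \nabla_X S_c(X) \big|_{X = X_0}
```

Intuitively, the system is pushed toward states that keep the largest number of future paths open, which at least superficially differs from a Boltzmann machine sampling a learned energy landscape.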

 

Cutting Edge and Peripherals

Not all advances in simulation are about content or polygon rasterization. There are a few experimental systems out there that push the limits of what we think can be done with render systems. On the visual side that includes intelligent LOD algorithms, ray tracing, and file systems emulating memory; on the external and hardware side, full immersion and motion capture. Not that simulation requires immersion to be accurate or complete, but it’s a nice touch!

 

GeoSpatial

Euclideon

Euclideon claims to have invented an indexing method that uses some type of file system for point caching faster than the industry standard. However, they spend more time hyping their superiority than explaining the technical side. What they offer seems better suited to the geospatial industry than to gaming, and none of their videos has yet depicted any type of animation. Of course, I and any other detractors may be wrong; it’s just that their marketing campaign is rather aggressive and might be sending signals they don’t intend.

 

Atomontage

Similar to Euclideon, Atomontage also offers “atomized” volumetric data. It is also comparable to Procworld, better in some regards and worse in others. One distinct difference is the fully destructible terrain. Also interesting is their claim that the “engine features a simple AI-based controller that is responsible for making all the modules perform the best way possible.” I am curious how this AI works and scales, and I wonder how it, or something like it, could operate across a large network with MapReduced data as an interface.

 

Ray-Tracing

Brigade

Ray-traced graphics have been shelved for quite some time because they require extensive compute power to keep up with the GPUs used in polygon rendering. Ray tracing is the simulation of light paths, like the lines you would draw in an introductory optics class. When combined with conventional rendering, the result is mind-blowing and begins to approach true realism. It has mostly been used for non-real-time rendering, but as I mentioned before, the hardware gap is closing and should be crossed relatively soon.
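Here’s the kernel of the technique in code. A ray tracer casts a ray per pixel and asks which object it hits first; the geometric workhorse is an intersection test like this textbook ray-sphere routine (nothing Brigade-specific, just the standard method):

```javascript
// Ray-sphere intersection. Ray: origin O + t * direction D (D normalized).
// Sphere: center C, radius r. Returns distance t along the ray, or null on miss.
function intersectSphere(O, D, C, r) {
  var L = { x: C.x - O.x, y: C.y - O.y, z: C.z - O.z };
  var tca = L.x * D.x + L.y * D.y + L.z * D.z;        // projection of L onto D
  var d2  = (L.x * L.x + L.y * L.y + L.z * L.z) - tca * tca;
  if (d2 > r * r) return null;                        // ray passes outside the sphere
  var thc = Math.sqrt(r * r - d2);
  var t = tca - thc;                                  // nearer of the two hit points
  if (t < 0) t = tca + thc;                           // origin inside the sphere
  return t < 0 ? null : t;
}
```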

With Brigade, they’re getting close. What would be brilliant is some type of integration into a streaming web player, such that Hollywood-quality renders are produced on the fly and sent to the receiver based on bandwidth, similar to OnLive. It might not be responsive enough for fast-paced gaming, but imagine instead interactive CGI “television” shows: interact with the characters and drive their personal lives into a divergence of multiple realities.

Of course, I feel like locally hardware-accelerated content needs to be in the works too. A computer five years from now should allow content indistinguishable from a photograph to be rendered at better than 60 fps. Give Moore’s law a little more room and this might be done on your smartphone. At least that’s what I’m hoping, and I want this hardware acceleration to extend into browsers, so that a mesh of computing resources can stay humming on massive simulations and then deliver the realistic scene directly to your web experience with minimal preloading. Just put on your AR glasses and immediately jump into any media experience. Which brings us to our next section!

 

Hardware Immersion

Oculus Rift

Firstly, condolences and well wishes to the family of Andrew Scott Reisse. What a tragic and terrifying series of events to take place in Santa Ana. There is a lot to be frustrated about in what happened; altercations like that should not happen at all, and car chases are very dangerous. Best wishes to his friends and family.

Now on to the hardware. There’s a 98% chance you’ve already heard about Oculus if you’re reading this; if not, read about it anywhere. The real question is why it has taken this long to attain high-fidelity VR technology. The answer is really Moore’s law again. We have super-thin LED screens, lightweight accelerometers, and low-latency screen updating. Plus, we live in the age of crowdfunding and fast ROI on consumer demand. There is no longer a ceaseless struggle for investors, OEM agreements and broad industry compliance. Business is streamlined to support fast-start organizations like this, and it’s only going to get faster.

WebGL can already support this technology; it’s perfect, really. With VR and AR becoming more accepted and widespread, there will be increasing demand for web-ready 3D experiences. I would not be surprised if virtual environments replace 2D websites; they are called “sites” after all. I can’t wait for the public release!

 

Subutai Corporation

This is one of my favorite projects, and it’s happening in my own city too! Subutai is working on a project called Clang, which seeks to redesign the controller to work more like a sword than a gun. To be honest, I never was very good with blades in the martial arts I did as a kid. My thing was always the bo staff or a polearm.

Just take a look at a game controller. The button on top is literally called a “trigger,” which is ill-suited to simulating the many degrees of freedom your hand and wrist can utilize with single-handed weaponry. By using a very sensitive accelerometer, Subutai will create a digital experience for swordplay gaming. Similar to a Wii remote, but amped up many times over; I can imagine it being very fun to play. The biggest challenge I can imagine would be collision feedback. For anybody who’s ever hammered a log splitter or used a sledgehammer, a solid impact can put stress all the way through your body.

Responsive feedback with that level of force is damn near impossible to simulate, but I’m sure if they get creative with gyroscopes and vibrating internal motors they might be able to approximate enough of the experience to make sword-gaming dynamic and engaging! I really hope I get to see these guys demo soon, maybe in person! Damned PAX conference sold out immediately. Every year I watch the mouth-breathers descend on Seattle, one of these years I’d love to go slobber alongside them. Maybe I’ll aim for the PAX dev conference, although I’m not so sure I want to shell out the $370 or take the time off work.

Just imagine a Clang controller in combination with an Oculus and a Kinect or Leap Motion! The modular combination of multiple peripherals is what will make the future of gaming extraordinarily rewarding! It’s also what’s going to make the CM/OEM model obsolete; locked, proprietary console systems will be too slow to keep up. To counteract this the patent trolls will come knocking, but let’s focus on one battle at a time. Nor do I want to bite the hand that feeds.

Emotiv and Mind over Mechanics

Now let’s imagine just how deep this rabbit hole can go: brain control of in-game avatars. Think I’m kidding? Nope, this is already real; we live in the future. Commercially, Emotiv is top of the line. They offer headsets at $299 for an encrypted-signal version or $750 for access to the raw EEG signal.

Mind over Mechanics is a project by a group at the University of Minnesota led by Dr. Bin He. Their novel approach allows a user wearing an EEG headset to control an AR drone remotely with their thoughts alone!

 

The Future Immersive Media Experience

Just imagine a future where visual immersion and mind-controlled avatars are the norm. There’s plenty of speculation you can do on /r/Futurology, but I’ll just say that in terms of what’s possible, the limits are unimaginable. There’s the Virtuix Omni, which will let you walk on an omni-directional treadmill and can be combined with other sensors like the Kinect or accelerometers like the Wii remote. The Oculus and Google Glass may eventually even be displaced by cybernetic contact lenses.

Of course it is challenging to say what will catch on and what will be a fad, but the forward momentum and level of experimentation mean that unique experiences will explode across our society. Content will need to scale up with platforms, so bigger, more engaging and more responsive digital experiences will start to surface. I like to picture, ten years from now, a thought-to-text reader working in collaboration with a Watson-like artificial intelligence. You express vague programming concepts and it tries to convert them into usable classes and functions, then makes suggestions about how it could all fit together or be indexed more efficiently. Suddenly you’re building the next-gen immersive experience with thought alone; it’s like the movie Inception, except computer-driven. What this means for society will depend entirely on how we use these immersive simulations to solve real-world socioeconomic problems and political corruption. Naturally, that leads us to the next discussion.

Societal Implications

What does it all mean? Why simulate the entire world? Let’s start by looking at what it means for “us” as a species. As discussed earlier with Zemerge, there are myriad benefits to representing human welfare in a digital format. It provides a testbed for solving problems without having to deal with the real negative consequences. The more realistic things get, the better we can adjust the trajectory of our entire species! The real question is: why shouldn’t we simulate the world? There’s nothing to lose and everything to gain.

There are obvious moral questions that arise when the simulations become self-aware. Do we create a digital heaven for everyone? Can playing God be a bad thing for some people’s mental health? Certainly there is something scary about the idea of a system knowing everything; but there is a large difference between a simulator that creates a random planet similar to Earth, and actually creating data inputs based on real world data.

It is scary that the NSA’s Utah Data Center reportedly has over a yottabyte of storage capacity with which to dissect that much personal data. The NSA claims “one of the biggest misconceptions about NSA is that we are unlawfully listening in on, or reading emails of, U.S. citizens. This is simply not the case.” This just means that they do read your email, but that it is technically legal, albeit unconstitutional. I just hope they are archiving the data in such a way that 100 years from now we’ll be able to do something productive with it, like reconstruct the personalities of the deceased based on their communications and web presence.

There are also a lot of creative things that could be done with the data were it stripped of PII. With enough measurements coming in from the real world, and with intelligent systems converting that data into something that can be modeled, great things could be made. The data could be used for something other than domestic spying or market manipulation: for increasing human welfare and minimizing the social factors that lead to crime. But that would require an altruistic approach to money that I don’t think any part of politics wants to embrace right now.

What I’m more interested in is the long term sociological change and the simulation focused institutions and organizations that believe in holistic world modeling as a means to bring about a better planet. Groups like Demand Progress may eventually become excellent partners, using our tool to depict alternate futures as consequence of our actions today. With the immersive technologies I mention, this means literally jumping into your personal future.

A Facebook app could scan all of your friends’ and family’s faces, convert them into animated characters, and then simulate their aging. Pictures of your home could be turned into guessed-at 3D floorplans. You could watch how politics shifts over the years, how pretty your significant other is at the age of 50, what sort of vehicle you drive, how much ecology is still alive.

As production technology becomes more abundant through 3D printers and automation, a normalization of wealth classes could occur naturally, but it could easily be stopped by protectionist laws as they exist today. I don’t foresee any great backlash against authority, however; instead our generation will do what it knows best: ignore authority and create our own cliques that operate independently.

The only real problem with this is that we keep defining work with punch cards and subjugation under a corporate model that resembles a crumbling feudal monarchy more than a thriving democracy. In the first world this just means dealing with the occasional horrible supervisor, but in the third world it’s a matter of life and death. The conflict of the twentieth century (and the late nineteenth) was labor versus capital, and labor sadly lost. Our century is the one of complexity and intelligence; labor will gradually become irrelevant and raw intelligence will overcome the authority of money. The true test becomes how exploitative versus compassionate that intelligence will be.

Institutions

ICES Foundation

The International Centre for Earth Simulation is an organization dedicated to whole-planet simulation. They are chasing exactly the same cross-disciplinary simulations we wish to pursue. They are founded as a public-private partnership and have no defined boundary on the types of organizations they might interface with. In that regard, they might serve as a great collaborator on the MetaSim project if the right capacity can be found. Their goals are similar to our own:

The mission of ICES is to integrate the vast pools of knowledge contained within today’s multitude of scientific and socio-economic specializations and to develop next generation ‘holistic’ modeling, simulation and visualizations that accurately depict the medium and long term future direction of planet Earth.

 

Whereas ICES wishes primarily to focus on the Earth, MetaSim will be a project decoupled from the geography we are so accustomed to. We want to model any Earth-like planet, or even completely alien environments. Still, a commitment to realism means rendering the Earth correctly. ICES has accumulated a great collection of simulation resources, along with ample knowledge of ecological sustainability and related data for modeling climate change. I don’t know how they operate day-to-day, or how many full-time people they have on staff, but I am interested in their extended network and ultimate goals. I will have to reach out to them soon!

NASA Dynamic Earth

NASA’s Dynamic Earth is one of the more sophisticated climate modeling efforts, run by the NASA Scientific Visualization Studio along with several collaborating partners. Together those partners produced a documentary, narrated by Liam Neeson and full of stunning visualizations! Seriously, go to the film site and watch some of those Vimeo clips. Here’s the original sample provided to the public:

JASSS

The Journal of Artificial Societies and Social Simulation is run by a consortium of universities, with the primary editor and hosting at the University of Surrey. This academic journal is dedicated solely to the simulation of society and its applications. It focuses on agent-based modeling in a sociological context; topics can cover anything the humanities traditionally deemed separate from the realm of statistics.

There are no fancy video demonstrations to show here, only the ever-changing most-viewed articles section. So instead, here are a few of the articles I find most interesting:

SimulPast

SimulPast is a Spanish collaboration across many universities to model human behavior, particularly in the early Neolithic. There is relevance across all branches of science, and some models do attempt to capture present-day sociology. There is a preference for engaging with questions around social dilemmas and subject-based models rather than relying on computational methods; this may just reflect a lack of trained individuals engaging in the way they need. Their program seems more like a routine economic geography or political economy program, but with the ambition to become something more. Without knowing Spanish I can’t fully verify that this is true. The one computational model I do find compelling, however, is their modeling of the spread of agriculture across Europe.

PALANTIR

Selling its services to major government agencies, Palantir has perhaps the most impressive (and frightening) array of tools for data monitoring and live agent simulation. The use cases are innumerable, but the price tag on licensing means this is definitely a tool for the “establishment” to control the masses. Founded by Peter Thiel, the software has demonstrated more capability than what the US Army was using. The Army is actually hesitant to admit defeat, since a whole grid of contractors building the DCGS-A can’t compete with a single startup.

In terms of application to the simulation project we have in mind, this is essentially not the use case we are aiming for. While a contingent dossier-type platform might serve the CIA pretty well, there’s really no reason for MetaSim to model intelligence at the level Palantir provides. That technology is great for tracking millions of people, and it depends on active field data to be useful. While MetaSim is not against using large live data sets, that is not our primary goal: we just want to stochastically model the likelihood of certain events.

 

Moving Forward from Here

Who will join us? Who can collaborate?

So we are looking for collaborators and contributors. With open source as an end goal, we want you to realize your creative dreams. Done together and on a web delivery system, the reach of the product would be undeniable! So don’t abandon your projects, but consider what a simulation coalition with a joint codebase would mean… It would be modular, with experts working on their respective sub-components, and it would be evolutionary, such that superior forks and branches win out over the competition. For now, there’s our Github organization: feel free to join at any time, introduce yourself on the reddit or the G+ page, or help us develop our new domain and drive participation in our forum!

Who is going to pay for all of this?

This is the project we all want to do, but nobody can float the tab to make it happen. Should we do it all for free? Work 40-50 hours at our day jobs and then put an additional 20-40 into this open source project? It’s difficult to make progress in the limited hours of concentration left each night.

The startup accelerators offer modest sums, just enough for one or two people to pay rent for a few months and turn out the bare bones of a project; then they own most of your work. Accelerators are looking for fast-payoff ROI, and they award money to redundant web applications hoping one might have just enough features to beat the rest or fit a niche market. Nor are accelerators necessarily enamored of the open source community.

We could crowdfund each other’s projects in a perpetual flux of compensation and reimbursement, but ultimately a crowdsourced finance ecosystem needs to be fed dollars from somewhere. Since we are motivated by a Robin Hood-style business model of distributing profit back to our consumers, I doubt investors will come flocking. I’m not even sure whether tax or contract law would permit what we have in mind, but ideally it would look something like Youtube’s partner program.

So should we wait on the government to cherry-pick which people get science grants that might vaguely apply to someone working on MetaSim? Do we bait tech giants like Google or Microsoft, hoping some experimental aspect of our work might benefit their bottom line or their ability to sell consumer data? Perhaps we experiment with cryptocurrency, either using Bitcoin directly or inventing our own, pegged to the utility of a particular simulation.

These are the questions that need to be figured out; they keep me awake at night. How do we run a non-profit that keeps the codebase open source and encourages worldwide collaboration, while also having a commercial division that draws money in and reinvests it in contributors?

Enabling the Prosumer in all of us

There should be a way to pay people who become more involved in a project if their work in turn drives revenue: some sort of micro-contract ecosystem based on automated object tagging. In this system, every contribution is drafted as a percentage of a net whole, and an open market for user-created games and experiences drives in revenue. Here I naively speculate about what this might look like, so take it with a grain of salt as a first-draft approach!

If John Doe uses the MetaSim engine to create a game or media experience, he picks all of the modules he’d like to include to run that experience. When he sells the game, some consumer markup function gives a meaningful wage to the experience creator, but the embedded micro-contracts are tagged in some Mongo-style collection such that every prerequisite contributor gets paid as well. In this way, income flows directly from the volume of use, or from some voting system, for all code, assets, music, art, and game concepts used in a particular media delivery. The number of “Likes” or “upvotes” an asset receives becomes its direct worth.
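For instance, a micro-contract document in that Mongo-style collection might look like the sketch below. Every field name is hypothetical, not a real MetaSim schema; it just shows how tagging plus upvotes could translate into a payout weight.

```typescript
// Hypothetical shape of a "micro-contract" document in a Mongo-style
// collection. Fields are illustrative, not an actual MetaSim schema.

interface MicroContract {
  assetId: string;        // code module, model, texture, track, etc.
  contributorId: string;  // who gets paid when the asset is used
  upvotes: number;        // community rating drives the asset's weight
  usageCount: number;     // how many shipped experiences include it
}

// One naive weighting: an asset's share of the contributor pool is its
// (upvotes x usage) score over the total score of all tagged assets.
function shareOf(asset: MicroContract, all: MicroContract[]): number {
  const score = (a: MicroContract) => (1 + a.upvotes) * a.usageCount;
  const total = all.reduce((sum, a) => sum + score(a), 0);
  return total === 0 ? 0 : score(asset) / total;
}
```

Weighting by upvotes times usage is just one choice; any reputation function could slot into `score`.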

Did any of you ever play Tony Hawk’s Pro Skater? Do you remember the skate park editor? Let’s use that as an example. Some person or team built the game engine, which is paramount. Then some 3D artist created each of the skate park pieces you get to build with, and each object has materials made by a texture artist. There are animations for each trick, and skaters customized down to the color of your pants. Everybody who made the game and the editor gets paid in a fixed way.

Let’s imagine now that there is a vibrant modding community and a set of developers continually updating the engine. After ten years of drift, the engine code might be only 10% the same. The models have been updated by multiple artists, and lots of new content has been added over time. Everything is open source and free to incorporate into a fork of the game, so you can pick and choose which primary themes and components to use. Let’s say you borrow about 80% of your content, including the engine itself, but invent 20% new content as assets and gameplay mechanics. You post your creation to a marketplace as something halfway between “DLC” and a game sequel. Let’s call this a “gamisode.”

You are asking $9.99 from new players, but if someone has already bought the original fork, their price gets reduced accordingly, and if they developed any of that previous content, they get an additional contributor discount. You elected to have a multiplayer component, so server hosting will cost something, but what happens after that would be magic. You pocket 20% of the revenue, and the remaining 80% gets split by an aggregation-distribution algorithm that weights how important or well liked the original codebase and game assets are. The game engine might make up 15% of that pool, and remember the original developers, whose code is 10% of today’s engine? They would get 0.15 x 0.10 = 1.5% of the pool.
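Just to check that arithmetic, here it is as a tiny sketch. The numbers come straight from the example above; the calculation itself is deliberately naive.

```typescript
// Reproducing the revenue split from the example above.

const price = 9.99;
const creatorCut = 0.20;                 // gamisode author pockets 20%
const pool = price * (1 - creatorCut);   // 80% flows to prior contributors

const engineWeight = 0.15;   // engine's weight within the 80% pool
const originalShare = 0.10;  // ~10% of the engine code survives ten years

// Original engine developers: 15% of the pool x their 10% of the engine.
const originalDevsFraction = engineWeight * originalShare; // 0.015 = 1.5%
console.log((pool * originalDevsFraction).toFixed(2));     // ~$0.12 per copy
```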

Now, how did our gamisode creator get access to that 80% of the content in the first place? He “borrows” it, interest-free. We live in an era of abundance and need to shed any false beliefs about asset scarcity. Most game engines today charge a lot of money for models or assets to plug into your game; you can pay for them with a business loan or investor money, or just make all of your own assets. This is an artificial barrier to artful creation, and it only hurts the engines themselves. In a system where all content is open source but carries some type of encrypted coder/artist stamp built into the commit system, you don’t have to worry about thievery. If you don’t want people to use your content in a certain way, bake that into the license; but if you don’t buy into the open source platform, someone else will, and your content will be out-competed.
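As for that coder/artist stamp, one plausible mechanism, and this is only an assumption on my part rather than a settled design, is to sign a hash of each asset with the contributor’s key. A sketch using Node’s built-in crypto module:

```typescript
// Hypothetical "coder/artist stamp": sign a hash of the asset with the
// contributor's private key so provenance can be verified by anyone.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function stampAsset(assetBytes: Buffer): { digest: string; signature: Buffer } {
  const digest = createHash("sha256").update(assetBytes).digest();
  return {
    digest: digest.toString("hex"),
    // For ed25519, pass null as the algorithm and sign the digest directly.
    signature: sign(null, digest, privateKey),
  };
}

// Anyone with the artist's public key can verify the stamp later.
const asset = Buffer.from("halfpipe.obj contents...");
const stamp = stampAsset(asset);
const rehash = createHash("sha256").update(asset).digest();
console.log(verify(null, rehash, publicKey, stamp.signature)); // true
```

The point is not this exact scheme, but that attribution can be cheap, automatic, and cryptographically checkable.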

Suddenly you have a system that provides passive income over generations of software, naturally producing an inverse logistic curve over time. It provides a consistent base income for those initial developers to experiment on newer and bigger things, and it is meritocratic, completely favoring the producer. Intelligent middleware like this, if done correctly, could transform the business of producing art into merely the art itself.

Moreover, in the hard sciences, consistent funding is a chronic problem. Since an accurate simulation relies on the sciences for its design, they must be treated as part of the economic ecosystem. If a cost allocation system compensated specializations for their modeling methods, it would relieve the government of some of the pressure of setting science funding priorities. For example, every time planetary nebulae or white dwarf stars are simulated in a space simulator built on our framework, a percentage of the cost flows to a fund dedicated to that particular specialization.
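Here is a naive sketch of how that routing could work. The tags, fund names, and the 5% rate below are all invented for illustration.

```typescript
// Naive sketch of routing a slice of simulation revenue to science funds.
// Tag names, fund names, and the 5% rate are made-up illustrations.

const fundForTag: Record<string, string> = {
  "planetary-nebula": "stellar-astrophysics-fund",
  "white-dwarf": "stellar-astrophysics-fund",
  "neolithic-agriculture": "computational-archaeology-fund",
};

function allocate(revenue: number, tagsUsed: string[], rate = 0.05) {
  const perTag = (revenue * rate) / tagsUsed.length;
  const payouts: Record<string, number> = {};
  for (const tag of tagsUsed) {
    const fund = fundForTag[tag] ?? "general-science-fund";
    payouts[fund] = (payouts[fund] ?? 0) + perTag;
  }
  return payouts;
}

console.log(allocate(1000, ["planetary-nebula", "white-dwarf"]));
// { "stellar-astrophysics-fund": 50 }
```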

Institutions to disseminate that flow of income appropriately already exist. I use the case of astronomy since I’m most familiar with it: the AAS or IAU might fold the additional income into their existing grant structures. However, I would discourage involving traditional pay-walled journal business models; the Open Access project has a few suggestions on that front. If instituted correctly, a gaming-driven income model for the sciences would absolutely change the face of our society. Halo 4 grossed $220 million in one day. That’s more than some entire fields of study are awarded in a year by the government. I’m just suggesting that if sci-fi gaming and films gave earnings to hard science and engineering, or if reality TV gave some manageable portion of net income to sociology and psychology, our civilization would be considerably better off.

Of course, this is all speculative, and in practice it is likely very difficult to implement or to enforce against redistribution hacks. Still, I hope I have set the wheels turning for you. Elective, decentralized wealth redistribution with complete transparency could be one way to fund basic research, experimental code projects, and an entirely new type of startup company. Emerging digital markets need to pay more money to the content producers; otherwise we will just keep inflating a consumption bubble of suboptimal progress and empty wages. That hurts everyone in the long run, and alternatives need to be considered!

Concluding Notes

Will MetaSim and projects like it lead the way? Or will big-business gaming steamroll us, patent trolls annihilate us, or market forces steer everything toward centralized platforms? I’m really hoping our distributed dream becomes contagious, but if not, let’s hope our experiments have some small influence on the direction of the giants.

Truly, I am not concerned about our own failure; what concerns me is getting humanity’s collective focus on the right track. We need to be able to model the world accurately and holistically, like in Skazinski’s Zemerge. It does not matter how that gets done, so long as the tools become freely available to everyone instead of just a privileged minority who can afford them.

I dream of turning such simulated worlds into a vast partner platform that pays contributors systematically. We can fix issues in the global economy by providing a distributed digital P2P service space, and transition from a consumer economy to a balanced production-to-consumption system.

So long as our MetaSim project behaves more like a resistance movement than a traditional company, progress will stay low-cost and the payoff enormous. We have a few brilliant contributors already, but we need more, from every level of experience. Come to us with a willingness to give, and you will learn as we go. Contribute and share your knowledge, and the world will become a better place for it. ¡Viva la simulación!


