Tuesday, April 1, 2014

Reflecting on Two Years of iontom.com!

First, April 4th marks two years of running this blog. It’s not much of an accomplishment, but it’s fun to look back. Sadly, a few months ago I had to move out of the house in Ballard and was forced to abandon my awesome garden. Although truthfully, long hours at work, neglectful weekends, a nightshade incursion, and an army of insects had already gotten the better of me. And of course my trip to hike the Great Sand Dunes National Monument was exhausting and exhilarating at the same time; it felt like visiting Mars!

Pushing to Create a Simulation Social Platform

Now, if you’ve read any of my previous work, you’ll know that back in 2012 I sketched an overview of how I would build the “perfect strategy game,” or at least what its game mechanics might look like. The “how” turned out to be a much more complex scenario, as I discovered in getting the /r/Simulate projects started. Aaron Santos created the MetaSim web API for our GitHub group, where “WebHexPlanet” was our most impressive repository, hosting the backbone of the API framework (inspired by the HexPlanet demo by Vicki Joel). Unfortunately, Heroku started charging $30 monthly to keep our demo running, so I had to pull it down while I find a cheaper hosting option.

After several months in early 2013 trying to keep up with the MetaSim project, I stopped trying to compensate for my lack of webdev skills and focused instead on characterizing the need for a holistic simulation architecture. I pushed so deep into this research that I had to split my post into two segments: the “State of the Simulated Universe,” Part I and Part II. I really should refactor those posts into some type of PDF with images instead of videos.

I really tried to encapsulate all the relevant technologies, which is a huge volume of subjects to fit into one post. To compensate, I experimented with creating a second website, rSimulate.com, which I might try to revive at a later time. Something about the theme and the format of the posts just doesn’t vibe right. If I can redesign it and get a more exciting top-level domain, I could potentially make something of it. The goal was to make it a syndicated news source for all sorts of related projects in the simulation space: essentially a hobby/niche geek site like Ars Technica or Gawker Media, but focused on all things simulation.

Efforts Toward a Gaming Experience to Teach Science & History

In the summer of 2013, I spent a few weeks converting my old gaming rig into an Ubuntu server and then managing the development environment for my friends in the rSimulate group. We cloned the Asterank project in the hopes of integrating elements of it with the WebGL planet viewer we had already started in WebHexPlanet. Of course, this would require migrating Asterank’s native Python into the pure JavaScript of WebHexPlanet.

[Image: Asteroid mining phase with space infrastructure as the 4X core mechanic: a civilization-style browser game with novel multi-planet colonization mechanics and a unique turn sequence system]

The goal was to create some type of caveman-to-space colonization experience, like that depicted in the Caveman2Cosmos mod for Civ4, but built on a new distributed web platform. The problem with C2C is mainly that the Civ4 engine isn’t multi-threaded, and the turn system forces you to wait for a complete cycle before you can make strategic decisions. Especially in the late game, the slowdown is such that you might wait 3-5 minutes for a turn cycle to resolve. The content overload also detracts from the believability of the experience, and the engine itself is very poor at modeling cultural organizations prior to the invention of nationalism. Soon I will post a more comprehensive assessment of how to model societies for games in a more Guns, Germs & Steel approach.

However, while working on the Asterank modification, I came up with a concept for a smaller title. It would focus purely on space colonization: building automated infrastructure with a distributed fuel network. It could utilize orbital planning like that depicted in Kerbal Space Program, but with an easier interface for plotting automated trajectories. Rather than requiring expert piloting by the player, the system would present choosable destinations based on a series of calculations involving the required energy, potential or kinetic (a back-of-envelope sketch of that calculation follows the list below). You could then create game phase paradigms as follows:

  • Detection Phase: Finding Near Earth Asteroids using the same methods currently in use with sources such as the SDSS. We would replicate this search using simulated images from Arkyd-class space telescopes. Players create detection rules using basic shapes and a small logical UI.
  • Near Earth Capture Phase: This would simulate launch from the Earth of craft capable of rendezvous with near earth objects (NEOs) or the moon. Players would either start building lunar launch infrastructure, or else attempt to catch up with NEOs with probes to study or convert into liquid hydrogen fuel if possible.
  • Infrastructure Production Phase: This phase would come after establishing a moon base, rerouting NEOs into lunar orbit, and possibly constructing a lunar space elevator. The longest phase of the game, it would revolve around the transition from Earth’s gravity well into the sun’s: building infrastructure across the solar system, colonizing Mars and Ceres, creating large habitats in space, and running a fuel transportation network.
  • Victory Conditions: Players would continue to work on building infrastructure.
    • Science Win: Exploration of new places and experiments to detect life would increase science points. Hitting a certain threshold of points wins.
    • Colonization Win: Put as many people as you can into space colonies. First to hit some milestone, 10k or 100k, wins.
    • Interstellar Mission: One approach could be to harvest antimatter from the Van Allen radiation belt, or from similar belts around planets with strong magnetic fields, mainly Jupiter. After acquiring enough fuel and building an “Ark” ship, the player embarks for Alpha Centauri or some exoplanet. Victory occurs when either an unmanned ship reaches Alpha Centauri or a colony ship of 1,000 people crosses the heliopause.
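To ground that energy-based destination ranking, here’s a back-of-envelope sketch in JavaScript of the classic two-burn Hohmann transfer between circular, coplanar orbits. Real NEO trajectories are messier, but this is the kind of calculation the planner could surface instead of demanding expert piloting; all the names here are illustrative, not from any existing codebase.

    // Rank a destination by the delta-v of a two-burn Hohmann transfer.
    var MU_SUN = 1.32712440018e11; // sun's gravitational parameter, km^3/s^2
    var AU = 1.496e8;              // astronomical unit, km

    function hohmannTransfer(mu, r1, r2) {
      var a = (r1 + r2) / 2; // semi-major axis of the transfer ellipse
      // Vis-viva gives the speed at each end; each burn is the difference
      // from circular orbital speed.
      var dv1 = Math.sqrt(mu / r1) * (Math.sqrt(2 * r2 / (r1 + r2)) - 1);
      var dv2 = Math.sqrt(mu / r2) * (1 - Math.sqrt(2 * r1 / (r1 + r2)));
      var tof = Math.PI * Math.sqrt(a * a * a / mu); // half an orbital period
      return { deltaV: dv1 + dv2, days: tof / 86400 };
    }

    // Earth (1 AU) to Mars (1.524 AU), heliocentric circular approximation:
    var trip = hohmannTransfer(MU_SUN, 1.0 * AU, 1.524 * AU);
    console.log(trip.deltaV.toFixed(1) + " km/s over " + trip.days.toFixed(0) + " days");
    // -> roughly 5.6 km/s over ~259 days
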
Now, I only spent a few weeks on this last summer before dire financial circumstances forced me to abandon everything. I never published outside of Reddit; I was too invested in figuring out the technology, and the design concepts existed only on a sketchpad. My recent employment situation might give me the ability to pursue this project again. By posting the preliminary concept here, though, I’m hedging the bet that someone else might build it someday, and at least give a nod to me after the fact. The important thing is that a game like this eventually exists to teach orbital mechanics and space science in a creative way. Since the platform would be WebGL, such a project would have the reach of a social-casual game but gameplay more compelling than some desktop titles.

Fighting for a Technical Solution to Structural Poverty

Then, for Labor Day 2013, I wrote a statement called the Nucleus Proposal, an extension of the sentiments being debated in /r/Futurology around the looming specter of technological unemployment and the case for a basic income to smooth the transition to a labor-free society. Unconditional Basic Income (UBI) has now been discussed in that subreddit for about a year. Many people are proponents; some feel the concept is over-discussed and not pertinent to Futurology. My stance is that a post is Futurology if it is predictive, not if it merely describes UBI: an if-then statement is futurology, as is any explicit mention of a future date.
I of course support the notion, but am potentially biased. I have a desire to create educational simulations, but there is no interest in financially supporting such projects, because the value in the technology behind them would be open source and not directly monetizable. Our society is focused on commercializing every object and interaction possible, not on promoting free information. Aaron Swartz and Edward Snowden are martyrs instead of heroes.
[Image: Aaron Swartz]
Being pro-establishment means supporting moneyed interests in a fixed finance economy, rather than the supposedly unpatriotic notion that ideas should be as free as the people who possess them. Cracked humorously declared this path into the future Forced Artificial Scarcity (FArtS). It’s supposed to be funny, but really it’s just sad, like the consensus in Congress that climate change isn’t real. I sincerely hope that Neil deGrasse Tyson can evangelize trust in the process of science.
Now if you want to know what Nucleus was supposed to be, you should just read the proposal. The short overview, however, is that it would be a mechanism for individuals to quickly find opportunities to contribute to projects, either open source or proprietary. A built-in attribution model would allow a marginal revenue percentage to be gifted to the open source coders and artists with the most commits and the highest social karma.
The goal of the project was to envision a digital mechanism to continue technological progress after the labor market dies off from automation and traditional wage economics can no longer support the public at large. The hope was to build an actor system like Palantir, but self-directed instead of built for mass surveillance, geared toward letting people self-employ at massive scale and integrate new projects into the broader utility of an iteratively self-improving simulation.
[Images: Syndicated status page focused on socializing project participation; Nucleus apps for collaborative authoring; Nucleus actor-based project system and social collaboration tool]

Of course, the project failed to start at all. I learned the hard way that too broad a scope, too few volunteers, and no incentive to complete the project… leads nowhere. The frustration was that we wanted to create something for a post-capital society, but didn’t have the capital or spare time to pursue the project on our own. Like it or not, our society will be a slave to money until the robot army of the bourgeoisie mows us all down with machine gun fire.

The Nucleus idea itself was created in September 2013, at a time when browser applications were just starting to flourish across the web and responsive designs were just picking up. Now, just 7 months later, Nucleus is just another small-minded idea for a project management mega-app. With half the web now using responsive content and richly authored content tool chains, it’s impossible to keep up. Our backend goal was that if we created common data definitions within a network of federated APIs, services could communicate with each other without a centralized application server.

You could then launch Nucleus nodes to manage individual projects, and if you chose to publish your work, you could have a visual representation of all the interacting projects that make up the web. I’m not sure how we’d visualize this; maybe a “map of the internet” that constantly updates. It might have functioned as an open source standard for communicating business objects, contracted individuals, automated services, and other pre-defined web agents: the concept referred to as the Distributed Autonomous Corporation. You don’t need management, just good ideas, willing people to manage the infrastructure, and an avenue of automated income for anybody who contributes. It could be code, content, or marketing on social sites; any action with impact would generate a cascade of value distribution… decentralized distribution based on rules of automata.
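For a sense of what those “common data definitions” might have looked like, here is a purely hypothetical sketch. Nucleus never got far enough to specify anything, so every schema name, field, and URL below is invented for illustration:

    // Hypothetical "Nucleus node" manifest: a shared data definition any
    // federated node could publish, letting peers discover projects and
    // route contributions without a central application server.
    var nodeManifest = {
      schema: "nucleus/project-node/v0", // the common definition peers agree on
      id: "urn:nucleus:webhexplanet",
      title: "WebHexPlanet",
      endpoints: {
        tasks: "/api/tasks",     // open work items for contributors
        commits: "/api/commits", // attribution-bearing contributions
        karma: "/api/karma"      // social weighting for the revenue share
      },
      peers: [ // other federated nodes this project links to
        "https://example.org/metasim",
        "https://example.org/asterank-fork"
      ]
    };

    // A crawler needs only the shared schema, not any node's internals:
    function discoverPeers(manifest) {
      if (manifest.schema !== "nucleus/project-node/v0") return [];
      return manifest.peers; // follow the project graph outward
    }
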

[Image: DACs]

The Hidden Promise in Cryptocurrency

What I didn’t see coming until October was the rise and fall of Bitcoin and an explosion of alternative cryptocurrencies. I did posit in the Nucleus Proposal that a cryptocurrency, engineered with the right parameters for supply and distribution, could function as a mechanism for increasing wealth based on democratically decided values (like energy efficiency) rather than the cryptic decisions of the Federal Reserve’s board. I’m not an economist, though, and I won’t try to engineer a model that works; I just wanted to break the ice on the idea that an open source algorithmic approach might benefit parts of society that are currently overlooked.
[Image: Dogecoin logo]

Then, in December, something crazy happened. A developer in Portland, OR and a marketer from Sydney got together and created Dogecoin, a currency based on a meme. I joked, “Welcome to Meme-nomics, the Shibularity is Near!” And it was apt: Dogecoin exploded into the world, and its value shot through the roof long before the blockchain’s reward halving even occurred. That led me to consider the idea that a reputation economy could exist if every person had a cryptocurrency associated with their name.

It’s a very anarcho-capitalist idea: let everyone have a currency, then convince other people to mine or buy yours. The socialist-mutualist part is that the person being mined gets half of the reward of every block mined (see the toy sketch below). It’s almost like a share in someone’s reputation: if you think someone has a lucrative career ahead of them, buy early; that person gets financial support early in their career when they need it most, and later on you can sell their reputation points for more than you initially put in. It’s not a micro-loan, it’s micro-investment in individuals, with no obligation on the person involved other than to create cool things and find an audience to support them.
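As a toy illustration of that split, here is just the arithmetic described above, not any real coin’s consensus rules; the names are invented:

    // Toy sketch of the "personal reputation coin" reward: the miner keeps
    // half of each block reward, and the person the coin is named for gets
    // the other half. Hypothetical, for illustration only.
    function splitBlockReward(reward, minerAddr, subjectAddr) {
      var half = reward / 2;
      return [
        { to: minerAddr, amount: half },  // incentive to secure the chain
        { to: subjectAddr, amount: half } // automatic support for the person
      ];
    }

    var coinbase = splitBlockReward(50, "miner-wallet", "subject-wallet");
    // -> the miner and the person being "mined" each receive 25
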
Anyway, that idea got shot down by my friend who knows the ins and outs of the SEC. Again, though, I was trying to speculate on a form of labor-agnostic income that would benefit people just working on their own projects with a social support network behind them. Seeing projects like Ethereum, Mastercoin, and Auroracoin spring up, I am hopeful that either a cryptocurrency or a blockchain API stands a chance at patching the holes our congressional representatives are tearing into the fabric of our social support infrastructure.

A Modest Proposal

Now, rather than allowing all of our human capital to rot away in their parents’ basements, or squander the next 20 years working entry-level corporate service jobs without benefits… we could start a human rental service. To qualify, you must be under 40, capable of turning off any cognitive reasoning skills, and willing to perform meaningless tasks bid to you by the angry baby-boomers who managed to acquire enough wealth to purchase a second home or some other large line of credit.

To participate in this venture, young people should report to the nearest medical processing center and forfeit any remaining financial assets to pay for the procedure. The process will be quick and simple. A device, inserted into the back of the skull, will connect to all the essential nerve bundles at the root of the brain stem. Now, rather than using this tool to create a new universe of socially liberating new art-forms, it will allow for an unparalleled commercialization service: body borrowing! The sagging forms of top-level executives need sag no longer!

For a simple hourly fee, a sexually attractive body may be rented and worn about for the day. The youthful generation wasn’t doing anything important anyway! Now, what should we do with all the bodies not currently employed by an approved funding agency or corporate municipality? Well, let’s just keep them in a warehouse! We can administer a slow drip feed optimized to keep the bodies well nourished, plus a state-of-the-art exercise and production facility where idle bodies can be put to work manufacturing merchandise to sell back to the network of the reasonably employed. It will be amazing! Think of the revenue! No one will be unemployed!

[Image: Ludovico technique]

Alright, that was my terrible April Fools gamble. Hope it wasn’t too terrible! Thanks for reading!




Saturday, March 29, 2014

#sarahpascillas #cafeforza


from Instagram: http://ift.tt/1fARoFf



from Instagram: http://ift.tt/1mhfdqn



Tuesday, March 25, 2014

I Joined the Discussion for the Futurology Podcast

The Futurology Podcast #13

I joined fellow /r/Futurology mods Alex and Jason to talk about the top 5 reddit posts and describe the Future Day event we held for the Seattle Futurist meetup. For those of you following nuclear fusion, I tried to recap the work of Derek Sutherland.

As far as the top links go, here they are in order. We took out a story about UBI (universal basic income) because we’d like to have a full debate at a later date. We need one volunteer to represent the pros of UBI and one who is against it; on the next podcast we’ll hopefully get some solid points from each side.

Top 5 Futurology Posts (excluding weekly science image)

  1.  [1823] – This Woman Invented a Way to Run 30 Lab Tests on Only One Drop of Blood (submitted by aeo1003)
  2.  [1448] – Mind-controlled robotic suit exoskeleton will allow a paralyzed teen to kick off the first ball in this year’s FIFA World Cup in Brazil (submitted by _trendspotter)
  3.  [1269] – Michio Kaku blew everyone’s minds on the Daily Show last night (submitted by creativeembassy)
  4.  [1142] – Power-Generating Nanoribbons Implanted on the Heart (submitted by bob_toe_nail)
  5.  [961] – Clerics Issue Fatwa: Muslims Can’t Live On Mars (submitted by Simcurious)

Listen on iTunes, Podbean, or watch on YouTube. HD YouTube version here.


Please let us know if you would like to debate someone via Skype over universal basic income.




Saturday, March 22, 2014

The Approaching WebGL Arms Race

[Image: WebGL]

The biggest news coming out of the Game Developers Conference (GDC) in San Francisco might be the next-gen Oculus Rift dev kit or Sony’s new “Project Morpheus.” However, the true sleeper surprise is currently nothing more than a footnote in the announcement of the Unity 5 engine: a new partnership between Mozilla and Unity to create a plugin-free browser experience that uses Unity as the content controller. Nor is this the only such partnership; Unreal 4 has also been migrated to the browser.


 

The technology behind this is called asm.js, a strict subset of JavaScript designed as a compiler target, which allows C++-level programming in the browser by offering a browser-based “LLVM.” According to Steven Wittens’ blog, the fundamental of any great browser-driven accelerated graphics service is having the simplest possible code handle the most data. It’s a simple concept: less code to compile means more resources available for the data. Applied to WebGL, this means you don’t create everything in JavaScript; you link JavaScript to a deeper language with remote calls and code generators (a tiny example follows the quote below).

Per Atwood’s law, it was inevitable that someone decided the back-end should be JavaScript. Thus was born emscripten, turning C into JS—or indeed, anything native into JS. Because the output is tailored to how JS VMs work, this already gets you pretty far. The trick is that native code manages its own memory, creating a stack and heap. Hence you can output JS that just manipulates pre-allocated typed arrays as much as possible, and minimizes use of the garbage collector. -Steven Wittens
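To show what that looks like in practice, here is a tiny hand-written asm.js module. In the real tool-chain this code is generated by Emscripten from C/C++ rather than written by hand; the toy function here just computes a vector magnitude.

    // A minimal asm.js module. The "use asm" pragma plus the type coercions
    // (+x for doubles) let the JS engine compile this ahead of time.
    function MathModule(stdlib, foreign, heap) {
      "use asm";
      var sqrt = stdlib.Math.sqrt;

      function magnitude(x, y) {
        x = +x; // parameter coerced to double
        y = +y;
        return +sqrt(x * x + y * y); // return type declared as double
      }

      return { magnitude: magnitude };
    }

    // Link against the browser's standard library and a typed-array heap:
    var mag = MathModule(window, null, new ArrayBuffer(0x10000)).magnitude;
    console.log(mag(3.0, 4.0)); // 5
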

People often miss the implications of stronger browser integration for AAA-level browser content. But put into the context of next-gen immersive technologies, you need to consider what kinds of interactions users will want as they browse the web. Will a static 2D page remain the standard in a future where augmented and virtual reality devices permeate the interaction space? I suspect that 2D browsing won’t vanish, but that 3D web experiences will feel more natural from an HCI perspective, for applications where word processing isn’t vitally important.


Firebox VR offers one example of what 3D web browsers might look like. Eventually content will have device detection, so that if you visit a site with a VR device it serves a scene built for it: believable, easy to navigate, and integrated with common 2D content formats. A few groups are actively preparing for a WebGL-driven internet. The most pronounced might be MontageJS, an open source repository maintained by the larger Montage Studio company, which provides the tool-chain and authoring system for their open interactive site experiences.

Eventually I would like to migrate my site to a host that supports NodeJS instead of Apache+Wordpress, so I can start demonstrating the interactive web on the site itself. For now, though, check out Montage: it allows Functional Reactive Binding between interactive JavaScript and HTML5 DOM elements. It’s powerful stuff that makes both 2D/UI and 3D/scene pieces work as reusable code.
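To make “Functional Reactive Binding” concrete, here is a minimal sketch of the pattern in plain JavaScript. To be clear, this is not Montage’s actual API, just an illustration of the idea: a DOM element re-renders whenever a bound model property changes.

    // One-way reactive binding sketch (illustrative only, not Montage's API):
    // re-run a render function whenever a bound model property is assigned.
    function bind(model, key, element, render) {
      var value = model[key];
      Object.defineProperty(model, key, {
        get: function () { return value; },
        set: function (next) {
          value = next;
          render(element, value); // re-render on every assignment
        }
      });
      render(element, value);     // initial render
    }

    // Usage: any assignment to state.count updates the DOM automatically.
    var state = { count: 0 };
    bind(state, "count", document.querySelector("#counter"), function (el, v) {
      el.textContent = "Count: " + v;
    });
    state.count = 42; // #counter now reads "Count: 42"
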

Creating a heavy-client (local content) WebGL system might sound counter-intuitive in this apex age of the “cloud.” However, utilizing local resources rather than a streaming service such as OnLive or nVidia GRID is actually starting to make sense. We live in a world where Moore’s law continues for chip design and GPU parallelism, but the non-commercial bandwidth provided by ISPs has remained stagnant for the last 5 years. Recently, monolithic Comcast annexed Time Warner, and Verizon effectively killed net neutrality.

These big self-serving monopolies no longer need to innovate on residential speeds; they can instead focus their resources on the important business task of killing off all content competitors reliant on their services (Netflix, GoogleTV, Amazon Prime, P2P, maybe all of WebRTC). We are headed into a “cloud-service” dark age. Low-bandwidth content won’t suffer, but video and web gaming are being forced into an arena where they either pay up or don’t work. This might lift a decade from now if Google Fiber or gigabit WiMAX/LTE appears at a reasonable price… but otherwise, we should settle in, because winter is coming.

[Image: Ned’s head]

So why even go for browser 3D in the first place? Browser-driven software is OS agnostic, and W3C-compliant browsers will all eventually share the same core capabilities. Browser 3D isn’t anything truly new, but web-stack agnosticism through HTML5 and JavaScript has only been around for a few years, which means plugin-free WebGL has only now been able to surface. Before that, Adobe Flash was the single proprietary interactive language of choice for over a decade, but the mobile space shattered that dream completely: lack of mobile OS support and less powerful smartphone hardware meant alternatives had to be explored. Where we might have expected a competitor to jump in and fill this void, by some miraculous means the torch was picked up by the open source community.

Google Chrome Experiments and Firefox helped get some of these initiatives started, but truly, the advent of git and mercurial, and the generosity of superbrains like “Mr. Doob,” helped shape the popular ThreeJS engine. Despite the “awesome factor” of these exciting new technologies, though, none has yet emerged with the full capabilities of a modern game engine. Unity had its plugin-based web player, but it wasn’t the same render environment as the primary engine itself. This new iteration of Unity looks to be essentially the full engine, maybe with a lower poly count.

Here are the most popular of the WebGL engines and experiments to date:

  • ThreeJS: The favorite of WebGL devs everywhere. It’s free, open source, well documented, and allows lower-level shader integration. The /r/Simulate team used it for our WebHexPlanet app; everyone loves ThreeJS! A COLLADA-to-JSON pipeline exists for loading models, but animations are still challenging. (A minimal scene sketch follows this list.)
  • BabylonJS: Originally created as a Microsoft CodePlex project, but eventually released under the Apache 2.0 license. It handles very similarly to ThreeJS in terms of scene library calls and animation. It hasn’t been around as long as Three, though, so it has fewer extensions at the moment.
  • Goo Engine: Proprietary software, but with a lot of animation-focused libraries. The idea is that Goo would like to be interaction-focused instead of scene-focused; I imagine only time will tell.
  • SceneJS: The SceneJS implementation includes a scene graph engine that uses JSON to create and manipulate nodes in the graph. This is similar to the architecture Aaron designed for MetaSim, which worked on top of ThreeJS.
  • Virtual World Framework: The VWF was originally founded with DoD money, but is now open under the Apache license. VWF uses NodeJS with web sockets as a messaging layer. There is also an impressive Virtual Sandbox that includes authoring tools and instance storage. This project is very powerful and probably the most overlooked given its capabilities.
  • More are listed at WebGL-Game-Engines.com
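And as a taste of why everyone loves ThreeJS, here is a minimal scene against the circa-2014 API: a lit, spinning sphere, the seed of something like WebHexPlanet. (Assumes three.js is already loaded on the page.)

    // Minimal ThreeJS scene: renderer, camera, a sphere, a light, and a loop.
    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(
      75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = 3;

    var renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    var planet = new THREE.Mesh(
      new THREE.SphereGeometry(1, 32, 32),        // radius, segments
      new THREE.MeshPhongMaterial({ color: 0x2266aa }));
    scene.add(planet);

    var light = new THREE.DirectionalLight(0xffffff, 1);
    light.position.set(5, 3, 5);
    scene.add(light);

    (function animate() {
      requestAnimationFrame(animate);
      planet.rotation.y += 0.01; // slow spin
      renderer.render(scene, camera);
    })();
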
[Images: Recursive components in the Virtual World Framework; boundaries of functional value for VWF elements; scene-graph API layering as defined by SceneJS]

Again, Unity5 for the web and Unreal4 are not the same kind of WebGL (though they are still WebGL, strictly speaking). Instead, they are asm.js-compiled engines, with all of the scene scripting done in languages other than JavaScript. That improves performance but makes user-specific, web-delivered content more challenging to develop: Unity5 won’t be defining HTML DOM elements the way MontageJS does. JavaScript-driven WebGL (maybe call it Web-JSGL) will offer easily reusable web parts that can be edited directly in HTML+JS, versus the LLVM approach (call it Web-LLGL?). Maybe monikers already exist for these two styles of WebGL, but I think it is important that the similarities and differences be noted. The “LLGL” approach absolutely will be better at defining very complex scenes and scenarios, something “JSGL” won’t be able to keep up with. This might not always be true, since WebCL could change the circumstances, but for the near future these similar-but-distinct variations of WebGL will come to fulfill very different use cases.

However, this has all been the how and the what, but I need to elaborate on the why. WebGL is important because of the next-generation web discussed earlier. Augmented and virtual reality hardware is starting to proliferate among consumer devices, as in the newest clash between Sony and Oculus. Whereas the decade of the aughts was focused on the hardware wars, this decade will begin to focus on the peripheral wars. Visual immersion (Oculus), tactile sensing (touch screens), and full-body motion (Kinect) have already become part of the entertainment experience. These technologies are only going to improve as new types of devices enter every 6-18 months. The mobile market is sluggishly toying with Google Glass, but once contact-lens AR is fully commercialized, it will be difficult for the public to resist the utility of full AR immersion.

[Image: DARPA-sponsored project by iOptiks]

Which leads us back to WebGL. Once we have undergone what Kurzweil defines as the transition from mobility to ubiquity, the web will not be something that just exists on a pocket device; it will be everywhere. Our world is 3D, so we will need fast-deploy web standards that operate in 3D space. Building this infrastructure on the existing technology of the web will mean that augmented locations can be visited as easily as a web page is today. It will feel more intuitive than reading a four-inch screen, and may very well become the most common method of human interaction. Certainly screen resolution has been trending upward much faster recently than it ever has before.

[Image: Digital video resolutions, VCD to 4K]

Imagine Skype/Facetime on steroids: cameras and lidar pick up the room around you, generate a 3D model of your friends, and then display them as if they were there, with interpolation for smooth animation. All the current signs indicate that this will evolve from web standards, not some other universally compiled set of rendering standards. It will probably even utilize HTTP for asset streaming and latency-agnostic communications.

If the internet fully invades reality, it will need WebGL. A lot of people are excited by JavaScript-driven WebGL, but the number of participants still pales in comparison to the crowd using the Unity tool-chain. These recent developments may either close that gap or make it irrelevant entirely. Let’s hope for the best!




#uwcherries #goodmemories


from Instagram: http://ift.tt/1jnfKFe



Friday, March 21, 2014

#portals #newperspectives


from Instagram: http://ift.tt/1jca1q8

