Monday, May 26, 2014

#babyducks #sammamish


from Instagram: http://ift.tt/1tHmmUL



Sunday, May 25, 2014

Fun times at #cafemox playing #munchikin


from Instagram: http://ift.tt/1mpTSvS



Wednesday, May 21, 2014

The apartment #swamp


from Instagram: http://ift.tt/1tmRXuI



Tuesday, May 20, 2014

GOJIRA!!! Giant Monsters of Childhood

The 2014 Gareth Edwards Godzilla film recently came out, and I have to say I walked away satisfied. Where I was expecting a much darker tale of wanton nuclear destruction, the story instead settled into the fun classic monster-vs-monster archetype. If sequels are made, Edwards might consider a remake of the 1954 film actually set in 1954 Japan. He might also consider revitalizing a few classic Toho monsters. The MUTOs were awesome, but can you imagine a redesigned, modern-CGI King Ghidorah? That would be terrifying!

king-ghidorah

For those among you who don’t know me, I’ve been a Godzilla fan since the early days (around maybe 4 years old). Godzilla always represented to me the indestructible sense of self. Others will try to tear you down with little insults (missiles), and opponents will try to get in your way (monsters), but in the end, perseverance and a powerful inner strength let you dominate and win whatever challenges you might face. That’s how the gigantic fire-breathing dinosaur became an iconic hero instead of staying a terrific villain.

Godzilla fandom also taught me about the web. At 13 years old I started this website, still hosted at tripod.com. I’m amazed it’s still up! I later built another site on Topcities, directly editing the HTML, but all I can find of it now is this Wayback Machine version. And since a lot of sites back in 2003 used browser frames, you don’t even get the full experience.

More than hosting information, I also used this as a media platform. After writing elaborate fiction in a fan forum, I would illustrate my stories and those of friends who submitted content. I was big into illustration: I would draw monsters that people submitted under “create-a-kaiju,” as well as my own comic books and cover art. After inking a scene with micron pens, I would scan my pictures and color them using Adobe Photoshop 6.0 (old school).

GvsGorgo
Then high school taught me the negligible social worth of having a fandom. I let go of Godzilla, and after high school, I let go of art. I let the monsters defeat me. I’ve since struggled to find a creative outlet to satisfy that itch, and I’ve been mildly successful. The true art of the 21st century is interactive media, which hopefully my team can help create with Asteroid.Ventures. Who knows, maybe I’ll even make a Kaiju game one day!

And lastly, if you’re interested, I’m selling some Godzilla stuff on ebay. I’m trying to reduce the amount of stuff in my life, which means digitizing as much media as I possibly can and discarding what’s left.

I’m really going to miss these babies! 75% of my childhood right here:

IMG_0936



http://ift.tt/1mT4GDA http://ift.tt/1nklBxk Fandom, Life, 2014, childhood, fandom, film, ghidorah, godzilla, html, kaiju, monsters, movies, VHS, websites

Monday, May 19, 2014

#sundownonlaketown


from Instagram: http://ift.tt/1lXaNWu



#stedwards #pineshadows


from Instagram: http://ift.tt/TofZcv



Sunday, May 18, 2014

That Night we Heckled Neil deGrasse Tyson

Do you remember that one night when my buddy Martin and I heckled Neil deGrasse Tyson and Wil Wheaton? I barely do; it was a ton of fun, a long while back. What a crazy night! Anyway, I recently found the video from that night, including Martin’s drunken exclamation about Ringworld being unstable. Enjoy!

Would you believe I’ve been an NGT fan since around 2005? Guess that makes me a science geek hipster.



http://ift.tt/S8lhrC http://ift.tt/eA8V8J Popular Culture, Science, astronomy, geekdom, hipster, Neil DeGrasse Tyson, ngt, science, science culture, space, startalk, startalk radio, video, wil wheaton

Friday, May 16, 2014

Functional Federated Modeling – This is What the Internet Could Be!

Leading Note: There are dozens of amazing ways to scale massively multi-user simulations using clever network technologies. We could create virtual worlds and distributed data hives beyond imagining. However, all of this is threatened by the FCC’s decision to allow internet fast lanes. Progress will be set back decades or shipped overseas; killing net neutrality would impair American economic growth, national security, and American science. We are on the precipice of transforming the internet into its full potential, and ISPs would prevent this to protect broadcast media, sinking the American future with it.

Building Bigger Virtual Worlds

In order to serialize and efficiently model complex simulations, computation needs to be broken down into several parts. Parallelization is nothing new, and neither is the concept of assigning functional components of a simulation across a network. What varies from project to project is how this division takes place, and what defines the purpose of the sub-components. First, I’m going to summarize, to the best of my ability, the manner in which distributed simulation architecture currently works. I will also speculate on how these somewhat separate concepts could be combined to vastly expand the capabilities of this type of simulation network.

This overview stems from the study I have undertaken independently since the winter of 2012. I started the /r/Simulate community, which pooled resources and kicked off a very compelling coding initiative. The work done by my cohort cannot be overstated; I am blown away by the level of thought which Aaron Santos put into the MetaSim API. While I have never been anywhere near as good a developer as Aaron and team, my investigation has instead been focused on becoming a subject matter expert for all technologies and strategies revolving around distributed simulation.

This led to a contractual role with Magnetar Games, through which I learned a magnitude of new concepts from its CEO, Duncan Suttles. His expertise in simulation architecture spans decades, ranging from the inner workings of a run-time rendering client to the layers of protocol required to exchange resources between parts. From the beginning, his group has been working with Department of Defense standards such as SEDRIS, and pioneering its XML implementation with the additional requirements of High Level Architecture (HLA). The intended plan of action for the Magnetar team is to integrate these simulation schema standards into the fabric of a distributed run-time infrastructure, which would eventually be used to describe scenes in popular virtual world server systems.

So much intellectual labor has gone into designing simulation mechanisms that make maximum use of systems in the most efficient ways possible. The exciting new element of our current paradigm, however, is the rising popularity of in-browser graphics. While a browser with WebGL is not at all required to construct a distributed simulation on the web, it adds a massive gain in the ability to recycle domain elements such as CSS to quickly create interactive components. Moreover, it eases integration with other web services. There is some debate as to whether the king of WebGL will be native JS or LLVM-compiled code; I believe both offer solutions to different problems.

Ultimately though, WebGL is simply the icing on the cake; the biggest highlight of progress will come in our ability to generate procedural content across massively parallel systems in real time. By segmenting aspects of simulation, utilizing grids, and defining flexible evolving standards, the world will be introduced to simulations of persistently greater complexity. Not only will this complexity boost show up in graphics or immersive peripherals, but also in the resources available for neural networks, vast agent systems, and more believable procedural narratives. The holy grail of simulation isn’t just a better physics model; it’s creating the best holistic model possible. The only way to predict the unexpected is to remove the boundary conditions of your model, and the value gains for doing this will be tremendous.

 

Established Simulation Network Designs

Run Time Infrastructure & the FOM

The concept known as “Run-Time Infrastructure,” or RTI, is a form of middleware used to handle segmented components of a simulation called “federates.” An RTI is a design principle which describes a system deployment; there’s no single way to implement one. It’s a specification and has no requirements regarding network protocol, object storage, or programming language. Modern implementations typically need to comply with the current IEEE standards or HLA specifications, which I’ll touch on in the next section. The most basic concept to grasp about an RTI, though, is that it acts as the “live” component of a simulation, handling all of the dependent parts and generally managing the simulation pieces that the application client will use. The RTI is the layer the human-facing client actually talks to.
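To make that concrete, here is a minimal TypeScript sketch of the RTI-as-middleware idea. The class and method names are my own shorthand, not taken from any actual HLA library: federates join, declare which object classes they care about, and the RTI fans attribute updates out to the subscribers.

```typescript
// Hypothetical sketch of the RTI-as-middleware idea; these names are my own,
// not from any real HLA library. Federates join, subscribe to object classes,
// and the RTI fans attribute updates out to the subscribers.
type AttributeUpdate = { objectId: string; attributes: Record<string, number> };

interface Federate {
  name: string;
  // Called by the RTI whenever a subscribed object changes.
  reflectAttributes(update: AttributeUpdate): void;
}

class RunTimeInfrastructure {
  private federates = new Map<string, Federate>();
  private subscriptions = new Map<string, Set<string>>(); // objectClass -> federate names

  join(federate: Federate): void {
    this.federates.set(federate.name, federate);
  }

  subscribe(federateName: string, objectClass: string): void {
    const subs = this.subscriptions.get(objectClass) ?? new Set<string>();
    subs.add(federateName);
    this.subscriptions.set(objectClass, subs);
  }

  // A publishing federate hands its state to the RTI; the RTI fans it out.
  updateAttributes(objectClass: string, update: AttributeUpdate): void {
    for (const name of this.subscriptions.get(objectClass) ?? []) {
      this.federates.get(name)?.reflectAttributes(update);
    }
  }
}
```

A real HLA RTI does far more than this (time management, ownership management, declaration management), but the publish/reflect loop is the core of it.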

1000px-RTI.svg
“Below” the RTI, you find a collection of federates treated as a relational-entity type system called a “Federation Object Model,” or FOM. These federated components are each reusable and generally split up according to function. An example might be the image below showing an RTI designed by NASA. One component of the federation would model the controls or occupants of a spacecraft. Another module would keep track of the orbital position, speed, and Keplerian elements of the spacecraft’s orbit. Since the craft is designed to measure the Earth’s magnetic field, you would also have a federate which models the size, shape, and geomagnetism of the Earth.

If you wanted to, you could even split the Earth federate into smaller sub-federates and smaller API chunks. In this example, they have some service or worker pulling magnetic field data from a stored source or external pipeline. Historically, most of the federates within an RTI need to be aware of each other at compile time, but differing approaches offer multiple solutions to this issue; partial federates and ad-hoc federates are one solution I cover later. The fundamentals to remember about an RTI are that nodes perform functions, and networks of these nodes communicate with the client.
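As a thought experiment, the NASA-style federation above could be written down as a purely declarative FOM. The names below are invented to mirror the prose, not pulled from a real NASA object model:

```typescript
// A hypothetical, purely declarative FOM for the spacecraft example:
// which federates exist, what each one publishes, and what it subscribes to.
interface FederateSpec {
  publishes: string[];   // object classes this federate owns and updates
  subscribes: string[];  // object classes it needs reflected back to it
}

const federationObjectModel: Record<string, FederateSpec> = {
  SpacecraftSystems: {
    publishes: ["Spacecraft.Controls", "Spacecraft.CrewState"],
    subscribes: ["Orbit.StateVector", "Earth.MagneticField"],
  },
  OrbitPropagator: {
    publishes: ["Orbit.StateVector", "Orbit.KeplerianElements"],
    subscribes: ["Spacecraft.Controls"],
  },
  EarthEnvironment: {
    publishes: ["Earth.MagneticField"],
    subscribes: ["Orbit.StateVector"], // field strength depends on where the craft is
  },
};
```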

 

RTI-Nasa

High Level Architecture (HLA) & Distributed Interactive Simulation (DIS)

Both HLA and DIS are Department of Defense (DoD) and DARPA sponsored initiatives which are now maintained by the IEEE standards organization. The difference between the two exists at the protocol level, within the messaging layer: DIS requires use of UDP, whereas HLA simply offers a packet specification and is agnostic to the transport.

However, despite finding a million papers characterizing the use cases and overall abstract concepts behind HLA, it is difficult to find any functional examples of an HLA RTI deployment. The best I could find was a white paper on the “Umbra” system defined by Sandia labs.

From what I’ve managed to gather, an RTI uses an “ambassador,” which functions like a web worker or timed function that invokes function or network calls between the federates. This occurs at regular intervals, and works to connect objects to an ambassador, which connects to the RTI, which connects to the clients.
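Here is a rough sketch of that ambassador pattern as I understand it, with illustrative method names rather than the real IEEE 1516 API: a timed loop that drains queued callbacks between a federate and the RTI.

```typescript
// Rough sketch of the "ambassador" pattern; method names are illustrative,
// not the real IEEE 1516 ambassador API. A timed loop shuttles queued
// callbacks between a federate and the RTI.
class FederateAmbassador {
  private inbox: Array<() => void> = [];

  queueReflection(callback: () => void): void {
    this.inbox.push(callback);
  }

  // Drain queued callbacks at a fixed interval, like a web worker tick.
  start(intervalMs: number): void {
    setInterval(() => {
      const pending = this.inbox.splice(0, this.inbox.length);
      for (const callback of pending) callback();
    }, intervalMs);
  }
}

const ambassador = new FederateAmbassador();
ambassador.queueReflection(() => console.log("orbit state reflected"));
ambassador.start(100); // deliver updates every 100 ms
```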

HLA-example HLA-example2

Now, from a game or web developer’s standpoint, I’m not entirely convinced that these standards need to be implemented across the web. Firstly, it is not free to use these standards. HLA compliance is a certification scheme run by SISO and the US military, which is great for well-backed defense contractors but terrible for small open source teams trying to model spacecraft missions like I was in the NASA SpaceAppChallenge.

Either SISO should release a free-tier version of the HLA rules and a pay-to-use military version… or a free alternative should be constructed which is open to everyone and doesn’t require certification. (You can buy access here.) We live in the era of social coding and open collaborative projects; the best way to promote interoperability is to make standards free and easy to use.

Data Distribution Service (DDS)

HLA isn’t the only available standard for distributed data; the Data Distribution Service is another. Both offer a “publish-subscribe” architecture with decentralized communication. This “middleware” layer is a translator and routing service which can send essentially two types of messages:

  • Signals - “Best-Effort” continuously changing data (sensor readings, for example)
  • Streams - Snapshots of data-object values which update states (which are attributes or modes of the data)

The difference with DDS, though, is that it has quality of service (QoS) controls built in to ensure that messages are transported to their destination. HLA has this too, but with DDS the QoS can be switched between a few different modes. Moreover, DDS is built around the idea of the keyed topic, for building custom data constructs. While HLA is centered around the Object Model Template (OMT), DDS focuses on the Data Local Reconstruction Layer (DLRL) and the Data-Centric Publish-Subscribe (DCPS).

So HLA is based on defining object hierarchies, while DDS is built for structuring more abstract data configurations. Both are built to create distributed models across a network using an RTI.
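To illustrate the keyed-topic idea, here is a toy publish-subscribe layer in the spirit of DDS. It is not the real DCPS API; the QoS flag only hints at how best-effort and reliable delivery would differ.

```typescript
// A toy keyed-topic publish-subscribe layer in the spirit of DDS (not the
// real DCPS API). Each sample carries a key, and subscribers ask for either
// best-effort or reliable delivery.
type QoS = "best-effort" | "reliable";
type Sample<T> = { key: string; value: T };
type Listener<T> = (sample: Sample<T>) => void;

class Topic<T> {
  private listeners: Array<{ qos: QoS; onData: Listener<T> }> = [];
  private lastByKey = new Map<string, Sample<T>>(); // latest state per key

  subscribe(qos: QoS, onData: Listener<T>): void {
    // A real "reliable" reader would also get missed samples replayed from
    // lastByKey; this sketch only records the intent.
    this.listeners.push({ qos, onData });
  }

  publish(sample: Sample<T>): void {
    this.lastByKey.set(sample.key, sample);
    for (const listener of this.listeners) listener.onData(sample);
  }
}

const windSensors = new Topic<{ speed: number }>();
windSensors.subscribe("best-effort", (s) => console.log(s.key, s.value.speed));
windSensors.publish({ key: "sensor-42", value: { speed: 11.3 } });
```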

DCPS

http://ift.tt/1oyd11A

 

Scene Building

Static Asset Conversion

Rather than keeping all the assets in the run-time format, it is easier to store objects in a storage format that can be passed around. This has been the common approach for nearly every system in the post-1 GB memory era. However, not all asset formats are created equal, and all need some form of special consideration to be used in a run-time environment. Here is an example of static assets converted into a usable graphics library format:

glTF
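As a sketch of what that conversion output looks like, here is a stripped-down asset description loosely modeled on glTF’s JSON layout (not a spec-complete document): scene nodes reference meshes, and meshes reference binary buffers the run-time client can hand to the GPU.

```typescript
// Loosely modeled on glTF's JSON layout, not a spec-complete document.
// Scene graph nodes reference meshes, and meshes reference binary buffers
// that the run-time client can upload to the GPU.
const convertedAsset = {
  asset: { generator: "hypothetical-converter", version: "2.0" },
  scenes: [{ nodes: [0] }],
  nodes: [{ name: "spacecraft", mesh: 0 }],
  meshes: [{ primitives: [{ attributes: { POSITION: 0 }, indices: 1 }] }],
  buffers: [{ uri: "spacecraft.bin", byteLength: 102400 }],
};

console.log(JSON.stringify(convertedAsset, null, 2));
```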

 

Distributed Scene Graph (DSG)

The DSG is an effort by Intel to create dynamic scene components which have a scheduled sync, sending bundled assets to a client manager which either renders the scene itself or pushes an update to the client to render. Moreover, this system is built to scale to thousands of users. It manages a client directory through a set of networked hosting servers.

To some extent, this becomes dependent on the middleware, which is the scene update propagation service. However, by chunking the scene into functional components and regions, and having system states updated between concurrent client managers, you are able to serve a lot of clients very quickly with the same data.
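A minimal sketch of that region-partitioned sync, with invented names: each region accumulates scene deltas, and a scheduled sync flushes them to every client manager responsible for that region.

```typescript
// Sketch of the DSG idea as I read it (names invented): the scene is split
// into regions, each region accumulates changes, and a scheduled sync pushes
// the accumulated deltas to every client manager responsible for that region.
type SceneDelta = { entityId: string; position: [number, number, number] };
type PushFn = (deltas: SceneDelta[]) => void;

class RegionSceneService {
  private pendingByRegion = new Map<string, SceneDelta[]>();
  private managersByRegion = new Map<string, PushFn[]>();

  registerClientManager(region: string, push: PushFn): void {
    const managers = this.managersByRegion.get(region) ?? [];
    managers.push(push);
    this.managersByRegion.set(region, managers);
  }

  recordChange(region: string, delta: SceneDelta): void {
    const pending = this.pendingByRegion.get(region) ?? [];
    pending.push(delta);
    this.pendingByRegion.set(region, pending);
  }

  // Called on a timer: flush each region's accumulated deltas exactly once.
  scheduledSync(): void {
    for (const [region, deltas] of this.pendingByRegion) {
      for (const push of this.managersByRegion.get(region) ?? []) push(deltas);
    }
    this.pendingByRegion.clear();
  }
}
```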

DSGArch

http://ift.tt/1oycZa3

Modern FOMs & Node Networks

So enter now the experimental and iteratively more complex types of distributed simulation networks. What HLA and DDS offer are tried and true schools of thought on building systems that need to work on a patchy, high-latency network. They were conceived before broadband was widely available, and since then a lot of new approaches to slicing up the simulation cake have been built.

Partial Federations

One notion which I am partial to, pun intended, is the idea of partial federations. Normally with a FOM, all of the chunks need to be declared up front (usually at compile time) so that all of the different parts are able to talk to one another. What some people have speculated on, however, is the concept of modularized federates which offer functional components only as required.

This means that you might start with a base layer of core pieces which are needed for the RTI to run at all. On top of that you have minimum required modules which would have the base language to host secondary “stacked” components. Think about this from the perspective of automata: you define a base language or compiler, then simply have modules which can be loaded or discarded based on need.

This maximizes compute efficiency and frees up memory to handle dynamic data instead of the data model. Here’s a cool diagram showing these dependency stacks as lego blocks:

dependantFOMs

http://ift.tt/1nXRGw0
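Here is a small sketch of how on-demand federate modules might look in code. Everything here is hypothetical: a core registry that only instantiates a module when a scenario actually asks for it, and discards it when it is no longer needed.

```typescript
// Sketch of the partial-federation idea: a small always-loaded core plus
// optional federate modules registered by name and only instantiated when
// some scenario actually asks for them. All names are hypothetical.
interface FederateModule {
  name: string;
  start(): void;
  stop(): void;
}

class PartialFederation {
  private available = new Map<string, () => FederateModule>();
  private loaded = new Map<string, FederateModule>();

  register(name: string, factory: () => FederateModule): void {
    this.available.set(name, factory);
  }

  // Load a module only when a scenario needs it; keep memory for live data.
  require(name: string): FederateModule {
    let module = this.loaded.get(name);
    if (!module) {
      const factory = this.available.get(name);
      if (!factory) throw new Error(`No federate module registered as "${name}"`);
      module = factory();
      module.start();
      this.loaded.set(name, module);
    }
    return module;
  }

  discard(name: string): void {
    this.loaded.get(name)?.stop();
    this.loaded.delete(name);
  }
}
```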

RESTful System to Build an RTI

HLA and DDS are for the most part agnostic to the transport itself; they simply define what should be transported. Any FOM can use UDP, TCP, or whatever else works best. In the case of the modern web, the king of the application layer is HTTP, and asynchronous apps can be built using WebSockets over TCP. This is the basis for nearly all of the web APIs in use, and it has already been extended into an RTI countless times. This is even the approach used by the rSimulate MetaSim platform.

There is a fully developed product out there called WebLVC, which is an out-of-the-box RESTful RTI that deploys HLA. It’s commercially available, mostly marketed to the defense industry.

webLVC
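For flavor, here is what “an RTI over HTTP” tends to look like in practice. This is not the actual WebLVC API; the endpoint paths and payloads are invented: join the federation with a POST, then push attribute updates to a resource URL.

```typescript
// Not a real WebLVC call sequence; the endpoints and payloads are invented
// to show the general shape of an RTI exposed over plain HTTP.
async function joinAndPublish(baseUrl: string): Promise<void> {
  const joinResponse = await fetch(`${baseUrl}/federations/demo/federates`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "orbit-propagator" }),
  });
  const { federateId } = await joinResponse.json();

  // Push a state update for one simulated object.
  await fetch(`${baseUrl}/federations/demo/objects/spacecraft-1`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ federateId, attributes: { altitudeKm: 705.2 } }),
  });
}

joinAndPublish("https://example.invalid/rti").catch(console.error);
```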

Future Architectures

Distributed Autonomous Ad-Hoc Grids

Firstly, in order to create truly astounding grids of the future, we are going to need much more cloud resource that is freely available and cheap to come by. Cryptocurrencies with grid computing built in might offer the best solution to this challenge. Having grid computing or cluster systems readily available based on need would do wonders for creating more intricately rich simulations.

autonomicity

http://ift.tt/1nXREEw

Nested Model Levels

Taking the principles of a nodal federate system, it is possible to treat each member of the federation as its own independent set of sub-federates. One node in the simulation manager could be weather modeling, which could in turn have its own associated grid for massively parallel computation. So long as the top-level node doesn’t need constant in-memory updates from the dependent grid to stay time-synchronized, you are clear to create as many grid federates as you can.

Now, if you split a nested simulation into independent grid nodes, you would become bottlenecked by the centralized node having to relay state changes to the consumer grid parts downstream. However, if you define in advance a language of communication between sub-node components in different master federates, you could arrange for faster communication directly between those sub-node pieces.

For example, you might have a massive grid simulating weather and another one simulating wildlife. Rather than having a message bottleneck between a centralized-state weather model and a centralized-state plant model, you could partition each grid by geography, and then have those two partitions exist on the same local memory.
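A tiny sketch of that co-location trick, with illustrative names: partition both models by geographic tile, and hash matching tiles to the same worker so their state exchange stays in local memory.

```typescript
// Sketch of the co-location idea from the weather/wildlife example: both
// models hash the same geographic tile to the same worker, so the two
// partitions for that tile end up neighbors in memory. Names are illustrative.
type Tile = { lat: number; lon: number };

function tileKey(tile: Tile, degreesPerTile = 1): string {
  const row = Math.floor(tile.lat / degreesPerTile);
  const col = Math.floor(tile.lon / degreesPerTile);
  return `${row}:${col}`;
}

function assignWorker(tile: Tile, workerCount: number): number {
  const key = tileKey(tile);
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % workerCount;
}

console.log(assignWorker({ lat: 45.4, lon: 12.3 }, 16)); // a Venice-ish tile
```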

Grids Doing Different Things as Needed with Layered Abstraction

Consider the Copenhagen interpretation of quantum mechanics: no events occur unless an observer is present. However, we can predict the frequency of events, and thereby do stochastic modeling in leveled tiers. Nobody hears the tree fall in the forest, but we can estimate how many trees fell in the last year. We can take guesses at which areas lost the most trees because we have another model that describes wind patterns on the mountain in question.

Or take another example: the streets of Venice. There are 20 million tourists who visit there each year. Do you need to model every single person on those streets, right down to their neural network, to determine their actions? You could, but it’s a lot less cost-intensive to simply model their behavior probabilistically. Use an agent model to simulate people’s motions. Maybe give them a simple finite state machine and have that live in memory as part of the federate. You can predict where they will go based on which needs they currently require most.

venice2

Now of course, you can’t have a conversation with an FSM that thinks like an ant. For this, we need a “model transition.” This quick swap between the FSM and a real neural network would give you the feeling of immersion among real people, without expending the cost of requiring real people. The idea would be that you have an AI that plays an actor for every person on the streets your character interacts with or comes into proximity of. This type of federate switching could give tremendous gains in realism with a marginal increase in CPU time. I call it “Probabilistic Entangled Agents,” or PEAs, but I’ll have to elaborate more on this concept at a later time.
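Since PEAs are still just an idea, here is only my guess at the shape of that model transition in code: every tourist runs a cheap probabilistic FSM until the player gets close, at which point the agent is handed to a (stubbed) richer behavior model.

```typescript
// A toy version of the "model transition" idea. PEA is the author's concept;
// this code is only a guess at its shape: cheap FSM far away, richer model
// when the player is close.
type TouristState = "wandering" | "eating" | "photographing";

interface BehaviorModel {
  nextAction(state: TouristState): TouristState;
}

const cheapFsm: BehaviorModel = {
  nextAction(state) {
    // Simple probabilistic transitions instead of real cognition.
    const roll = Math.random();
    if (state === "wandering") return roll < 0.2 ? "eating" : "wandering";
    return roll < 0.5 ? "wandering" : "photographing";
  },
};

const richModel: BehaviorModel = {
  nextAction(state) {
    // Stand-in for a neural-network-driven actor.
    return state;
  },
};

function tickTourist(state: TouristState, distanceToPlayer: number): TouristState {
  const model = distanceToPlayer < 10 ? richModel : cheapFsm; // model transition
  return model.nextAction(state);
}

console.log(tickTourist("wandering", 250)); // far away: the FSM handles it
```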

RTI with Intelligent Package Distribution

The strength of NodeJS’s npm, Python’s pip, or even .NET’s NuGet is the ability to list dependencies and versions, and also the ability to quickly install them. What is needed, ideally, is something like this written into the DNA of the wider RTI infrastructure. Docker.io and Vagrant offer VM imaging and custom deployments which have prerequisites already installed and configured. Even using Amazon EC2 lets you pick a custom image to be deployed based on need.

Moreover, most of these VPS image services also have service APIs that let you deploy servers with just a RESTful call. The future of an autonomous scalable simulation would be an RTI that has a nuclear federate seed written at the operating-stack level. It becomes ubiquitous to operating systems, or at least very simple to install. You bundle this with a set of value-attribution stacks like Gridcoin, and then have a set of installable dependencies which can be deployed on an as-needed basis.

This might be created as another API on top of REST, or it might even be its own protocol channel, but ideally you could convert the free assets of the internet to be reconfigured on demand as requested by a universe of services. These services would trigger automatic dependency services, and then scale in complexity in accordance with the value of investment being pumped in. Something like DACs and Ethereum might complement this nicely.
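To picture it, here is a hypothetical manifest for an installable federate, borrowing the shape of an npm package.json: declared dependencies plus a hint about the cloud resources the module needs when it gets spooled up on demand.

```typescript
// Hypothetical federate manifest, shaped like an npm package.json. Every
// name here is invented; the container image is not a real repository.
const weatherFederateManifest = {
  name: "weather-grid-federate",
  version: "0.1.0",
  federateDependencies: {
    "core-rti-seed": "^1.0.0",
    "geodesy-utils": "^2.3.0",
  },
  deployment: {
    image: "example/weather-grid:0.1.0",
    minNodes: 4,
    scaleOnDemand: true,
  },
};

console.log(`${weatherFederateManifest.name} needs at least ${weatherFederateManifest.deployment.minNodes} nodes`);
```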

The most fun use-case scenario: you strap on your VR goggles and decide you want to spool up a virtual world to play a historical simulation game. You give it inputs on game duration, number of players, and level of immersion, and it spits back a set of tiered choices based on the desired complexity and the resources it takes to empower that complexity. Now, if instead of paying for the network traffic to a massively centralized ISP, you get to invest in a distributed network of git submodules and a multitude of cloud and grid components… well, suddenly you are participating in resource distribution based on the creation of real value and user-empowered content.

thedeep

The Resource Dilemma & the FCC

All of these monumental design paradigms offer novel ways to experience the internet and would create demand for the next significant push of Moore’s law. They would allow for holistic simulations which model everything from climate change to economic agent models with billions of independent simulated agents. We could test policy in virtuo before signing it into law. However, the capabilities of these ever-expansive systems still depend on the computational and network resources available to them.

Imagine a single processor running the agent behaviors of a single complex agent, and then imagine the number of networked API calls needed to relay system state awareness to all dependent federates. Consider a billion grid services generating data at a constant rate, trying to communicate with a few hundred different federate models (wind patterns, air quality, crop health, soil conditions). You would have a very large chain of interdependent components which might need to send massive amounts of data between them. The same is true of the internet of things, which could be co-opted into a global awareness simulation model. This could be petabytes or exabytes of data flowing between components.

So here we have a conflict with the vision of the future as laid out by Comcast lobbyists. The internet is in its biggest transitional period to date. We are finally reaching the point where technology is allowing a site to become a “place.” Technologies such as a federated RTI or a massively parallel grid offer the potential to turn our dumb apps into a series of intelligent components, each fulfilling an important function. However, if the FCC does not rule the internet a common carrier telecom service, the cost of communications will skyrocket and it will set back our ability to evolve the internet. Other countries will fill the void and America will become a technological backwater where only consumption-driven applications have the money to pay for the traffic between functional nodes.

It would be a world where we have the power to model the entire Earth through holistic simulation to fight climate change and global conflict, but instead find that the only data “bailiffs” with the financing to build these systems are marketeers and major online retailers trying to siphon every last drop of cash from a consumer base which has no surplus to spend. If data rates between remote web services are charged the way à la carte B2C rates would be charged by Comcast… we would find not only that America’s future is desperately limited, but also that all but the biggest players would be forced to fold.

The startup economy would shrivel within a few years, and youth unemployment rates might resemble Spain’s by the end of the decade. If Tom Wheeler approves a tiered internet, his actions would amount to nothing short of treason. The damage would make the impact of the NSA leaks look like pocket change, and would potentially compromise or magnify the costs of hundreds of projects currently curated by the Defense Department. Making the non-commercial internet more expensive rather than less would grind our economy to a halt and threaten national security.

Will the future of the internet be one with massively distributed parallel simulations and emergent media? Or will the future of the internet be the destruction of the world’s most useful invention to serve as a glorified cable television delivery system?

fcc



http://ift.tt/1oycYD1 http://ift.tt/1nXRFIx MetaSim, Project, Projects, rSimulate, Simulation, Simulations, Technology, Uncategorized, architecture, cluster, CPU, cryptocurrency, DAC, Darpa, distributed, DoD, DSG, FCC, federate, FOM, future, gaming, gltf, graphics, grid, HLA, middleware, network design, occulus, opensim, parallel, RTI, SEDRIS, simulation, technology, VR, web, webgl

Tuesday, May 13, 2014

#seattlesun


from Instagram: http://ift.tt/SZlUVm



Wednesday, May 7, 2014

/r/Futurology becomes default!

Hi Readers! I don’t have too much time to spend writing this today since I’m working on a grant for Asteroid.Ventures, but a community I have helped shape achieved a major milestone today! For the last two-ish years, I’ve been a moderator on the subreddit called Futurology, and today (5/7/2014) we are being indoctrinated as a default subreddit. (Which means new reddit users will be automatically subscribed.)

ology

The community itself has certainly grown. It started in mid-2012 and really began picking up steam around Q4 of that year, which I think is about when I became involved. Having been well read in loads of science fiction and enamored with the concept of the technological singularity, it was refreshing to find a host of people who shared (mostly) the same forward-thinking ideology.

Concurrently, work in Futurology prompted me to make my own community, /r/Simulate, which is focused on sharing all things related to simulation, be it academic, industrial, gaming, or conceptual like A-Life. While I don’t expect /r/Simulate to ever have the staying power of Futurology, I do greatly appreciate the hacker culture we’ve put together there.

Back to Futurology though, it really has been impressive to watch the subscriber base increase so dramatically:

redditfuturology

Granted, a lot of the current growth might be attributed to the recent “technology-gate,” where events conspiring around keyword censorship actually made international network news. The trouble started when a user noticed that /r/technology had been censoring Tesla posts. /u/Gamion made this post to the Futurology page, for which I had this response. We were in a really tricky situation: we didn’t want to encourage turning Futurology into a witch hunt against Technology, but we still wanted to keep our discussion platform open and transparent.

This later gave rise to /u/Multi-Mod starting the transparency page and domain blacklist to ensure that going forward, there was no miscommunication with users about what should and should not be deleted. The most contentious thing right now is the dividing line between spammers and real content creators. Reddit has a policy that no more than 10% of what you share should come from your own domains, but sharing more than 10% of your content from a major news site is acceptable. However, we’ve seen that blog authors at certain outlets like the Huffington Post are prone to blatant spam as well, and people who post only their own YouTube videos can also be ousted for spam. To abide by the stricter policies, I won’t post my blog to Futurology anymore.

However, I think a more pertinent measure than a 10:1 ratio for submissions would be a 25:1 ratio of comments to submissions. Reddit’s strength is in its comments, and many content creators don’t want to spend all day posting content in order to be allowed to share. Many content producers see this as a “hurdle” to jump in order to self-promote. Really though, I think there are two forms of self-promoters: content producers who want bona fide feedback on their work by means of reddit comments, and promoters running SEO nonsense trying to pull in ad money. Most non-news blogs receive under 20k views a year, which means about $10 every two years through Google AdSense. By favoring major news outlets over small private blogs, there is some sense of big corporate bias that just feels a bit unsettling. (This is a reddit-wide thing, not a Futurology thing.)

If you want to follow some Futurology related blogs and help share their voice, please follow these guys and share their content for them (if you like their stuff):

Now, back to topic, one thing that’s interesting is to watch the data show the fallout from technology-gate: (you can see the dramatic rise in /r/tech and /r/technews)

techcompare

Of course, Futurology received an influx of subscribers as well, but our rank was already pretty high at that stage. We’ve been climbing through the ranks for quite some time, hitting the top 200 in January and becoming the daily #1 non-default several times prior to our inclusion as a proper default. At this rate, I would not be surprised if our subscriber base surpassed 1 million by the end of Q3 2014.

milestones

Of course, with great power comes great responsibility, and a ton more work. Moderation used to be all fun and games; now it’s a constant debate over what should be deleted, what shouldn’t, and when. To assist, Multi-Mod installed the AutoModerator bot used by many of the defaults, but we’ve set most features to “report” posts instead of auto-deleting them. Humans working with robots instead of against them or for us… That’s really what Futurology is all about.

Moreover, we have a surplus of secondary subreddits which support the primary one:

It is interesting to see Futurology starting to branch out and function as a type of “Virtual State,” mainly because we’re always listening to our own echo chamber that promotes virtual policy participation.

Which brings me to the final promotion I need to make: /u/Xenophon1, who started Futurology, also started the “Futurist Party!” Will we ever amount to any real political force? Who knows, maybe if we institutionalize something now, we will hold some power a generation from now (around 2030). Our platform looks something like left libertarianism with a strong emphasis on open source technology and space exploration. We think that automation threatens the traditional taxed-labor model, and that poor regulation of the internet (looking at you, Tom Wheeler) will destroy our economic surplus. This also prompted me to write the Nucleus Proposal as a technological means to connect motivated workers with interesting projects and the necessary monetary resources. I really need to revamp /r/Nucleus, and have had some help from /u/EdEnlightenU.

So that’s it for now, my hour-and-a-half post about all the big changes on reddit and Futurology! (I could have written this as a self-post on the sub itself, but I wanted Xenophon1 or MouthSpeakWords to have that honor.) Now the shameless part: if you like my work, you should give me a $1 monthly donation with Gittip! I’ve been going for broke with Asteroid.Ventures and could use the help!

Cheers!



http://ift.tt/1jBo4ne http://ift.tt/1jBo4ng Uncategorized, admins, analysis, asteroid ventures, blogs, future, futurist, futurology, metrics, moderation, nucleus, politics, reddit, simulate, space, technology

Monday, May 5, 2014

#tastytrout


from Instagram: http://ift.tt/1uqvcaO



Sunday, May 4, 2014

#fishingspree #troutsnshit


from Instagram: http://ift.tt/RiQ5VW



Friday, May 2, 2014

#capnkenzie #edmondsbeach #rainbowdays


from Instagram: http://ift.tt/1fCZ3ZE

