I would like to suggest a new word.
Anthropocosmos, n. and adj. Chiefly with "the." The epoch during which human activity is considered to be a significant influence on the balance, beauty, and ecology of the entire universe.
Based on ...
Anthropocene, n. and adj. Chiefly with "the." The era of geological time during which human activity is considered to be the dominant influence on the environment, climate, and ecology of the earth. --The Oxford English Dictionary
As we become painfully aware of the extent to which human activity is influencing the planet and its environment, we are also accelerating into the epoch of space exploration. Not only will our influence substantially affect the future of this blue dot we call Earth, but our never-ending desire to explore and expand our frontiers is also extending humanity's influence on the cosmos. I think of it as the Anthropocosmos, a term that captures how we must responsibly consider our role in the universe in the same way that the Anthropocene expresses our responsibility for this world.
The struggle to protect the commons--the public spaces and resources we all depend on, like the oceans or Central Park--is not a new problem. Shepherds grazing sheep on shared land without consideration for other flocks will soon find grass growing thin. We already know that farming and the timber industry deplete the forests, and the destruction of that commons in turn affects the commons that is the air we breathe. These are versions of the same problem--the tragedy of the commons. It suggests that, left unchecked, self-interest can deplete resources that support the common good.
The early days of the internet were an amazing example of people and organizations from a variety of sectors coming together to create a global commons that was self-governed and well-managed by those who built it. Similarly, we're now in an internet-like moment in which we can imagine an explosion of innovation in space, our ultimate commons, as nongovernment groups, companies, and individuals begin to drive progress there. We can learn from the internet--its successes and failures--to create a generative and well-managed ecosystem in space as we grow into our responsibility as stewards of the Anthropocosmos.
Like the internet, space exploration has been mostly a government-vs.-government race and a government-with-government collaboration. The internet started out as Arpanet, which was funded by the Department of Defense's Advanced Research Projects Agency and operated by the military until 1990. A great deal of anxiety and deliberation went into the decision to allow commercial and nonresearch uses of the network, much as NASA extensively deliberated over opening the doors to "public-private partnership" leading up to the Commercial Crew Program launch in 2010. This year is the 50th anniversary of the Apollo 11 mission that put men on the moon, a multibillion-dollar effort funded by US taxpayers. Today, the private space industry is robust, and private firms compete to deliver payloads and, soon, to put people into orbit and on the moon.
The state of development of the space industry reminds me of where the internet was in the early '90s. The cost of putting a satellite into orbit has gone from supercomputer-level costs and design cycles to just a few thousand dollars, similar to the cost of a fully loaded personal computer. In many ways, SpaceX, Blue Origin, and Rocket Lab are like UUNET and PSINet[1]--the first commercial internet service providers--doing more efficiently what government-funded research networks did in the past.
[1] Disclosure: I was at one point an employee of PSINet and the CEO of PSINet Japan.
When these private, for-profit ISPs took over the process of building out the internet into a global network, we saw an explosion of innovation--and a dot-com bubble, followed by a crash and then another surge after the crash. When we were connecting everyone to the internet, we couldn't imagine all the possible things--good and bad--that it would bring. In the same way, space development will most likely expand far beyond the obvious--mining, human settlements, basic research--to many other ideas. The question now is, how can we direct the self-interested businesses that will undoubtedly power entrepreneurial expansion, growth, and innovation in space toward the shared, long-term health of the space commons?
In the early days of the internet, everyone pitched in like people tending a community garden. We were a band of jolly pirates on a newly discovered island paradise far away from the messiness of the real world. In "A Declaration of the Independence of Cyberspace," John Perry Barlow even declared cyberspace a new place, saying "We are forming our own social contract. This governance will arise according to the conditions of our world, not yours." His utopian idea, which I shared at the time, is now echoed by some of today's spacebound entrepreneurs who dream of settling Mars or deploying terraforming pods on planets across the galaxy.
While it wasn't obvious how life on the internet would play out when we were building the early infrastructure, back then academics, businesses, and virtually anyone else who was interested worked on its standards and resource allocation. We created governance mechanisms in communities like ICANN for coordination and dispute resolution, run by people dedicated to the protection and flourishing of the internet commons. In short, we built the foundations on which everyone could develop businesses and communities. At least in the beginning, the internet effectively harnessed the self-interest of commercial players and money from the markets to develop open protocols, free for everyone to use, that the communities designed. In the early 1990s, the internet was one of the best examples of a well-managed commons, with no one controlling it and everyone benefiting from it.
A quarter-century on, cyberspace hasn't evolved into the independent, self-organized utopia that Barlow envisioned. As the internet "democratized," new users and entrepreneurs who weren't involved in the genesis of the internet joined. It was overrun by people who didn't think of themselves as pirate gardeners tending the sacred network that supported this idealistic cyberspace--our newly created commons. They were more interested in products and services created by companies, and those companies often didn't care as much about ideals as about making returns for their investors. On the early internet, for example, people ran their own web servers, fees for connectivity were always flat--sometimes simply free--and almost all content was shared. Today, we have near-monopolies and walled-garden services; the mobile internet is metered and expensive; and copyright is vigorously enforced. From the perspective of this internet pioneer and others, cyberspace has become a much less hospitable place for users as well as developers, a tragedy of the commons.
Such disregard for the commons, if allowed to continue into planetary orbit and beyond, could have tangibly negative consequences. The decisions we make in the sociopolitical, economic, and architectural foundations of Earth's near-space cocoon will directly impact daily life on the surface--from debris falling in populated areas to advertisements that could block our view of the skies. A piece of space junk has already hit a woman in Oklahoma, and an out-of-control Chinese space station caused a great deal of anxiety before, luckily, falling harmlessly into the Pacific Ocean.
So I think the rules and governance models for space are extremely important to understand if we are to mitigate known problems such as space debris, set precedents for the unknown, and manage the race to lunar settlements. We already have the Outer Space Treaty, which governs our efforts and protects our resources in space as a shared commons. The International Space Station is a great example of a coordinated effort by many competing interests to develop standards and work together on a common project that benefits all participants.
However, recent announcements by Vice President Mike Pence of an "America First" agenda for the moon and space fail to acknowledge the fact that the US pursues space exploration and science in deep coordination and interdependence with other countries. As new opportunities emerge for humans to develop economic activities and communities in orbit around the Earth, on asteroids, and beyond, nationalistic actions by the Trump administration could undermine the opportunity to pursue a multistakeholder, internationally coordinated approach to designing future human space activities and to ensure that space benefits all humankind.
As space becomes more commercial and pedestrian, like the internet, we must not allow the cosmos to become a commercial and government free-for-all with disregard for the commons and shared values. In a recent Wall Street Journal article, Media Lab PhD student and director of the Media Lab Space Exploration Initiative[2] Ariel Ekblaw suggested we need a new generation of "space planners" and "space architects" to coordinate such expansive growth while enabling open innovation. Through such communities, we can build the space equivalents of ICANN and the Internet Engineering Task Force, in coordination with international policy and governance guidance from the UN Office for Outer Space Affairs.
[2] Disclosure: I am one of the two principal investigators on this initiative.
I am hopeful that Ariel and a new generation of space architects can learn from our successes and failures in protecting the internet commons and build a better paradigm for space, one that will robustly self-regulate and allow growth and generative creativity while developing strong norms that help us with our environmental and societal issues here on Earth. Already there are positive signs: SpaceX recently decided to fly its satellites in lower orbits to limit space debris.
Fifty years ago, America "won" the moonshot. Today, we must "win" the Earthshot. The internet connected our world like never before, and as the iconic 1968 Earthrise photo shows, space helps us see our world like never before. Serving as responsible stewards of these crucial commons profoundly expands our circles of awareness. My dear friend Margarita Mora often asks, "What kind of ancestors do we want to be?" I want to be an ancestor who helped make the Anthropocene and the Anthropocosmos periods of history when humans helped the universe flourish with life and prosperity.
Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of Government except all those other forms that have been tried from time to time.
--Winston Churchill
I was on the board of the Internet Corporation for Assigned Names and Numbers (ICANN) from 2004 to 2007. This was a thankless task that I viewed as something like being on jury duty in exchange for being permitted to use the internet, upon which much of my life was built. Maybe people hate ICANN because it seems so bureaucratic, slow, and political, but I will always defend it as the best possible solution to something that is really hard--resolving the problem of allocating names and numbers for the internet when every country and every sector in the world has reasons for believing that they deserve a particular range of IP addresses or the rights to a domain name.
I view the early architecture of the internet as the most successful experiment in decentralized governance. The internet service providers and the people who ran the servers didn't need to know how the whole thing ran; they just needed to make sure that their corner of the internet was working properly and that people's email and packets magically found their way to the right places. Almost everything was decentralized except one piece--the determination of the unique names and numbers that identified every single thing connected to the internet. So it makes sense that this was the hardest piece for the open and decentralized idealists to get right.
After Reuters picked up the news on May 20 that ICANN had handed over the top-level domain (TLD) .amazon to Jeff Bezos' Amazon.com, pending a 30-day comment period, Twitter and the broader internet turned into a flurry of conversations criticizing the ICANN process. It brought out all of the usual conspiracy theorists and internet governance pundits, which brought back old memories and reminded me how some things are still the same, even though much on the internet is barely recognizable from the early days. And while it made me cringe and wish that the people of the Amazon basin had gotten control of that TLD, I agree with ICANN's decision. I remembered my time at ICANN and how hard it was to make the right decisions in the face of what, to the public, appeared to be obviously wrong.
Originally, early internet pioneer Jon Postel ran the root servers that managed the names and numbers, and he decided who got what. Generally speaking, the rule was first come, first served--but be reasonable about the names you ask for. A move to design a more formal governance process for managing these resources began as the internet became more important, and it included institutions such as the Berkman Center, where I am a faculty associate. The death of Jon Postel accelerated the process and triggered a somewhat contentious move by the US Commerce Department and others to step in and create ICANN.
ICANN is a multi-stakeholder nonprofit organization originally created under the US Department of Commerce that has since transitioned to become a global multi-stakeholder community. Its complicated organizational structure includes various supporting organizations to represent country-level TLD organizations, the public, businesses, governments, the domain name registrars and registries, network security, etc. These constituencies are represented on the board of directors, which deliberates on and makes many of the key decisions that deal with names and numbers on the internet. One of the keys to the success of ICANN was that it wasn't controlled by intergovernmental bodies like the United Nations or the International Telecommunication Union (ITU); instead, governments were just part of an advisory function--the Governmental Advisory Committee (GAC). This allowed many more voices at the table as peers than in traditional intergovernmental organizations.
The difficulty of the process is that business and intellectual property interests believe international trademark laws should govern who gets to control the domain names. The "At Large" community, which represents users, has other views, and the GAC represents governments, which have completely different views on how things should be decided. It's like playing with a Rubik's Cube that actually doesn't have a solution.
The important thing was that everyone was in the room when we made decisions and got to have their say, and the board, which represented all of the various constituents, would vote and ultimately make decisions after each of the week-long deliberation sessions. Everyone walked away feeling that they had been heard and that, in the end, they were somehow committed to adhering to the consensus-like process.
When I joined the board, my view was to be extremely transparent about the process and to stick to our commitments and focus on good governance, even if some of the decisions made us feel uncomfortable.
During my tenure, we had two very controversial votes. One was the vote on the .xxx TLD. Some governments, such as Brazil's, thought that it would be a kind of "sex pavilion" that would increase pornography on the internet. The US conservative Christian community engaged in a letter-writing campaign to ICANN and to politicians to block the approval. The ICM Registry, the company proposing the domain, suggested that .xxx would allow it to create best practices, including preventing copyright infringement and other illegal activity, and to create a way to enforce responsible adult entertainment.
It was first proposed in 2000 by the ICM Registry and resubmitted in 2004. They received a great deal of pushback and continued to fight for approval. The domain came up for a vote again in 2007, while I was on the board, and the proposal was struck down in a 9 to 5 vote--I voted in the minority, in favor of the proposal, because I didn't feel that we should deviate from our process and allow political pressure to sway us. In 2008, ICM filed an application with the International Centre for Dispute Resolution, and eventually, in 2011, ICANN approved the .xxx generic top-level domain.
In 2005 we approved .cat for Catalan, which also received a great deal of criticism and pushback because the community worried that it would be the beginning of a politicization of TLDs by various separatist movements and that ICANN would become the battleground for these disputes. But this concern never really manifested.
Then, on March 10, 2019, the board of ICANN approved the TLD .amazon, against the protests of the Amazon Cooperation Treaty Organization and the South American governments representing the Amazon Basin. The vote was the result of seven years of deliberations and process, with governments arguing that a company shouldn't get the name of a geographic region and Jeff Bezos' Amazon arguing that it had complied with all of the required processes.
When I first joined MIT, we owned what was called net 18. In other words, any IP address that started with 18. The IP addresses 18.0.0.1 through 18.255.255.254 were all owned by MIT. You could recognize any MIT computer because its IP address started with 18. MIT, one of the early users of the internet, was allocated a whole "class A" segment of the internet, which adds up to 16,777,214 usable IP addresses--more than many entire countries were allocated. Clearly this wasn't "fair," but it was consistent with the "first come, first served" style of early internet resource allocation. In April 2017, MIT sold 8 million of these addresses to Amazon and broke up our net 18, to the sorrow of many of us who so cherished this privilege and status. This also required us to renumber many things at MIT and turn our network into a much more "normal" one.
Although I shook my fist at Amazon and capitalism when I heard this, in hindsight the elitist notion that MIT should have 16 million IP addresses was also wrong, and Amazon probably needed the addresses more.
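For anyone who wants to check the arithmetic on a class A block, here is a minimal sketch using Python's standard ipaddress module; the sample address is purely illustrative.

```python
import ipaddress

# MIT's historical "net 18": the entire 18.0.0.0/8 class A block.
net_18 = ipaddress.ip_network("18.0.0.0/8")

# Total addresses in a /8, including the network and broadcast addresses.
print(net_18.num_addresses)       # 16777216

# Usable host addresses, 18.0.0.1 through 18.255.255.254.
print(net_18.num_addresses - 2)   # 16777214

# Under the old allocation, any address beginning with 18 belonged to MIT.
sample = ipaddress.ip_address("18.1.2.3")  # illustrative address only
print(sample in net_18)           # True
```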
So it was with similar ire that I read the tweet that said that Amazon got .amazon. I've been particularly involved in the protection of the rights of indigenous people through my conservation and cultural activities and my first reaction was that, yet again, Western capitalism and colonialism were treading on the rights of the vulnerable.
But then I remembered those hours and hours of deliberation and fighting over .xxx and the crazy arguments about why we couldn't let this happen. I also remembered fighting until I was red in the face about how we needed to stick to our principles and our self-declared guidelines and not allow pressure from US politicians and their constituents to sway us.
While I am not close to the ICANN process these days, I can imagine the pressure that they must have come under. You can see the foot-dragging and years of struggle just reading the board resolution approving .amazon.
So while it annoys me, and I wish that .amazon had gone to the people of the Amazon basin, I also feel that ICANN is probably working and doing its job. The job of ICANN is to govern the name space in an open and inclusive process and to steward this process in the best, but never perfect, way possible. And if you really care, we are in that 30-day public comment period, so speak up!
This column is the second in a series about young people and screens. Read the first post, about connected parenting, here.
When I was in high school, I emailed the authors of the textbooks we used so I could better question my teachers; I spent endless hours chatting with the sysadmins of university computer systems about networks; and I started threads online for many of my classes where we had much more robust conversations than in the classroom. The first conferences I attended as a teenager were conferences with mostly adult communities of online networkers who eventually became my mentors and colleagues.
I cannot imagine how I would have learned what I have learned or met the many, many people who've enriched my life and work without the internet. So I know first-hand how, today, the internet, online games, and a variety of emerging technologies can significantly benefit children and their experiences.
That said, I also know that, in general, the internet has become a more menacing place than when I was in school. To take just one example, parents and other industry observers share a growing concern about the content that YouTube serves up to young people. A Sesame Street sing-along with Elmo leads to one of those weird color-ball videos, which leads to a string of clips that keeps them glued to screens, with increasingly strange but engaging content of questionable social or educational value, interspersed with stuff that looks like content but might be some sort of sponsored content for Play-Doh. The rise of commercial content for young people is exemplified by YouTube "kidfluencer" marketing, which pitches itself to brands as offering "an added layer of kid safety," and its rampant growth has many parents up in arms.
In response, Senator Ed Markey, a longtime proponent of children's online privacy protections, is cosponsoring a new bill to expand the Children's Online Privacy Protection Act (COPPA). It would, among other things, extend protections from children under 13 to those up to age 15 and ban online marketing videos targeted at them. The hope is that this will compel sites like YouTube and Facebook to manage their algorithms so that they do not serve up endless streams of content promoting commercial products to children. It gets a little complicated, though, because in today's world, the kids themselves are brands, and they have product lines of their own. So the line between self-expression and endorsements is very blurry and confounds traditional regulations and delineations.
The proposed bill is well-intentioned and may limit exposure to promotional content, but it may also have unintended consequences. Take the existing version of COPPA, passed in 1998, which introduced a parental permission requirement for children under 13 to participate in commercial online platforms. Most open platforms responded by excluding those under 13 rather than take on the onerous parental permission process and the challenges of serving children. This drove young people's participation underground on these sites, since they could easily misrepresent their age or use the account of a friend or caregiver. Research and everyday experience indicate that young people under 13 are all over YouTube and Facebook, and busy caregivers, including parents, are often complicit in letting this happen.
That doesn't mean, of course, that parents aren't concerned about the time their young people are spending on screens, and Google and Facebook have responded, respectively, with the kid-only "spaces" on YouTube and Messenger.
But these policy and tech solutions ignore the underlying reality that young people crave contact with older kids and grown-up expertise, and that mixed-age interaction is essential to their learning and development.
Not only is banning young people from open platforms an iffy, hard-to-enforce proposition, it's unclear whether it is even the best thing for them. It's possible that this new bill could damage the system the way other well-intentioned efforts have in the past. I can't forget the overly stringent Computer Fraud and Abuse Act. Written a year after the movie WarGames, the law made it a felony to break the terms of service of an online service--so that, say, an investigative journalist couldn't run a script on Facebook to test whether the algorithm was doing what the company said it was. Regulating these technologies requires an interdisciplinary approach involving legal, policy, social, and technical experts working closely with industry, government, and consumers to get them to work the way we want them to.
Given the complexity of the issue, is the only way to protect young people to exclude them from the grown-up internet? Can algorithms be optimized for learning, high-quality content, and positive intergenerational communication for young people? What gets less attention than outright restriction is how we might optimize these platforms to provide joy, positive engagement, learning, and healthy communities for young people and families.
Children are exposed to risks at churches, schools, malls, parks, and anywhere adults and children interact. Even when harms and abuses happen, we don't talk about shutting down parks and churches, and we don't exclude young people from these intergenerational spaces. We also don't ask parents to evaluate the risks and give written permission every time their kid walks into an open commercial space like a mall or grocery store. We hold the leadership of these institutions accountable, pushing them to establish positive norms and punish abuse. As a society, we know the benefits of these institutions outweigh the harms.
Based on a massive EU-wide study of children online, communication researcher Sonia Livingstone argues that internet access should be considered a fundamental right of children. She notes that risks and opportunities go hand in hand: "The more often children use the internet, the more digital skills and literacies they generally gain, the more online opportunities they enjoy and—the tricky part for policymakers—the more risks they encounter." Shutting down children's access to open online resources often most harms vulnerable young people, such as those with special needs or those lacking financial resources. Consider, for example, the case of a home- and wheelchair-bound child whose parents only discovered his rich online gaming community and empowered online identity after his death. Or Autcraft, a Minecraft server community where young people with autism can foster friendships via a medium that often serves them better than face-to-face interactions.
As I was working on my last column about young people and screen time, I spent some time talking to my sister, Mimi Ito, who directs the Connected Learning Lab at UC Irvine. We discussed how these problems and the negative publicity around screens were causing caregivers to develop unhealthy relationships with their children while trying to regulate their exposure to screens and the content they delivered. The messages caregivers are getting about the need to regulate and monitor screen time are much louder than messages about how they can actively engage with young people's online interests. Mimi's recent book, Affinity Online: How Connection and Shared Interest Fuel Learning, features a range of mixed-age online communities that demonstrate how young people can learn from other young people and adult experts online. Often it's the young people themselves who create communities, enforce norms, and insist on high-quality content. One of the cases, investigated by Rachel Cody Pfister as part of her PhD work at the University of California, San Diego, is Hogwarts at Ravelry, a community of Harry Potter fans who knit together on Ravelry, an online platform for fiber arts. A 10-year-old girl founded the community, and members ranged in age from 11 to 70-plus at the time of Rachel's study.
Hogwarts at Ravelry is just one of a multitude of free and open intergenerational online learning communities of different shapes and sizes. The MIT Media Lab, where I work, is home to Scratch, a project created in the Lifelong Kindergarten group; millions of young people around the world are part of this safe and healthy space for creative coding. Some Reddit groups, like /r/aww for cute animal content or a range of subreddits on Pokémon Go, are lively spaces of intergenerational communication. As with Scratch, these massive communities thrive because of strict content and community guidelines, algorithms optimized to support these norms, and dedicated human moderation.
YouTube is also an excellent source of content for learning and discovering new interests. One now famous 12-year-old learned to dubstep just by watching YouTube videos, for example. The challenge is squaring the incentives of free-for-all commercial platforms like YouTube with the needs of special populations like young people and intergenerational sub-communities with specific norms and standards. We need to recognize that young people will make contact with commercial content and grown-ups online, and we need to figure out better ways to regulate and optimize platforms to serve participants of mixed ages. This means bringing young people's interests, needs, and voices to the table, not shutting them out or making them invisible to online platforms and algorithms. This is why I've issued a call for research papers about algorithmic rights and protections for children together with my sister and our colleague and developmental psychologist, Candice Odgers. We hope to spark an interdisciplinary discussion of issues among a wide range of stakeholders to find answers to questions like: How can we create interfaces between the new, algorithmically governed platforms and their designers and civil society? How might we nudge YouTube and other platforms to be more like Scratch, designed for the benefit of young people and optimized not for engagement and revenue but instead for learning, exploration, and high-quality content? Can the internet support an ecosystem of platforms tailored to young people and mixed-age communities, where children can safely learn from each other, together with and from adults?
I know how important it is for young people to have connections to a world bigger and more diverse than their own. And I think that developers of these technologies (myself included) have a responsibility to design them based on scientific evidence and the participation of the public. We can't leave it to commercial entities to develop and guide today's learning platforms and internet communities—but we can't shut these platforms down or prevent children from having access to meaningful online relationships and knowledge, either.
When the Ridgecrest earthquake reached L.A. yesterday evening (no damage this far from the epicenter from that quake or the one the previous day), I was “in” a moving elevator under attack in the “Vader Immortal” Oculus Quest VR simulation. I didn’t realize that there was a quake at all; everything seemed part of the VR experience (haptic feedback in the hand controllers was already buzzing my arms at the time).
The only oddity was that I heard a strange clinking sound that at the time had no obvious source but that I figured was somehow part of the simulation. Actually, it was probably the sound of the ceiling fan’s pull chains above me hitting the glass light fixtures as the fan presumably swayed a bit.
Quakes of this sort are actually very easy to miss if you’re not sitting or standing quietly (I barely felt the one the previous day and wasn’t immediately sure that it was a quake), but I did find my experience last night to be rather amusing in retrospect.
By the way, “Vader Immortal” — and the Quest itself — are very, very cool, very much 21st century “sci-fi” tech finally realized. My thanks to Oculus for sending me a Quest for my experiments.
–Lauren–
So there’s yet another controversy surrounding YouTube and videos that include young children — this time concerns about YouTube suggesting such videos to “presumed” pedophiles.
We can argue about what YouTube should or should not be recommending to any given user. There are some calls for YT to not recommend such videos when it detects them (an imperfect process) — though I’m not convinced that this would really make much difference so long as the videos themselves are public.
But here’s a more fundamental question:
Why the hell are parents uploading videos of young children publicly to YouTube in the first place?
This is of course a subset of a more general issue — parents who apparently can’t resist posting all manner of photos and other personal information about their children in public online forums, much of which is going to be at the very least intensely embarrassing to those children when they’re older. And the Internet rarely ever forgets anything that was ever public (the protestations of EU politicians and regulators notwithstanding).
There are really only two major possibilities concerning such video uploads. Either the parents don’t care about these issues, or they don’t understand them. Or perhaps both.
Various display apps and web pages exist that will automatically display YT videos from around the world that have few or no current views. There’s an endless stream of these. Thousands. Millions? Typically these seem as if they have been automatically uploaded by various camera and video apps, possibly without the uploader specifically intending for the upload to occur at all. Many of these involve schools and children.
So a possible answer to my question above may be that many YT users — including parents of young children — are either not fully aware of what they are uploading, or do not realize that the uploads are public and are subject to being suggested to strangers or found by searching.
This leads us to another question. YT channel owners already have the ability to set their channel default privacy settings and the privacy settings for each individual video.
Currently those YT defaults are initially set to public.
Should YT’s defaults be private rather than public?
Looking at it from a user trust and safety standpoint, we may be approaching such a necessity, especially given the pressure for increased regulatory oversight from politicians and governments, which in my opinion is best avoided if at all possible.
These questions and their ramifications are complex to say the least.
Clearly, default channel and video privacy would be the safest approach, ensuring that videos would typically only be shared with specific other users deemed suitable by the channel owner.
All of the public sharing capabilities of YT would still be present, but they would require the owner to make specific decisions about the channel default and/or individual video settings. If a channel owner wanted to make some or all of their videos public, whether videos already uploaded or those uploaded going forward, that would be their choice. Full channel and individual video privacy would only be the original defaults, purely as a safety measure.
Finer-grained settings might also be possible, not only including existing options like “unlisted” videos, but also specific options to control the visibility of videos and channels in search and suggestions.
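To make the existing per-video control concrete, here is a minimal sketch of how a channel owner could switch a batch of their own videos to private using the YouTube Data API v3 via the google-api-python-client library. The credentials file and the list of video IDs are assumptions for illustration only, and the sketch omits the OAuth consent flow needed to authorize the channel owner’s account.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes previously authorized OAuth credentials for the channel owner
# are stored locally (hypothetical filename); obtaining them is not shown.
creds = Credentials.from_authorized_user_file("owner_credentials.json")
youtube = build("youtube", "v3", credentials=creds)

# Hypothetical IDs of the owner's own videos that should become private.
video_ids = ["VIDEO_ID_1", "VIDEO_ID_2"]

for vid in video_ids:
    # Update only the status part of each video, setting it to private.
    youtube.videos().update(
        part="status",
        body={"id": vid, "status": {"privacyStatus": "private"}},
    ).execute()
    print(f"Set {vid} to private")
```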
Some of the complexities of such an approach are obvious. More controls means the potential for more user confusion. Fewer videos in search and suggestions limits visibility and could impact YT revenue streams to both Google and channel owners in complex ways that may be difficult to predict with significant accuracy.
But in the end, the last question here seems to be a relatively simple one. Should any YouTube uploaders ever have their videos publicly available for viewing, search, or suggestions if that was not actually their specific and informed intent?
I believe that the answer to that question is no.
Be seeing you.
–Lauren–
Resident Advisor suggested I write an article for them about the emerging online underground, and the result was my most comprehensive (and polemical) statement on the topic to date (click here to read). Its running theme was a comparison to late-C20th punk and indie, and it went into the aesthetics of vaporwave and PC Music too. The piece appeared alongside a (controversial but, I thought, pretty brilliant) podcast by #Feelings boss Ben Aqua. How many times has the concept of punk been redefined? Far too many to count, and besides, no one seems to want to label music any more. Even in the early '90s, barely 15 years into its life, the definition of punk had been broadened and warped in surprising directions—punk could mean naive pop, heavy metal in the charts, or even doing something yourself, whatever that might be. In a new music culture where guitars have been replaced by cracked copies of Ableton, bands have been replaced by anonymous individuals with SoundCloud accounts, and where rock as such hasn't really been on the underground agenda for years, what significance does punk still have?...
In each of these areas, the processes and problems of the online underground were those of the punk underground in the late 20th century. Building a musical culture on SoundCloud, Bandcamp and Facebook might seem new and strange (if only due to the technology involved) or—more negatively—unimportant or a sign of decline, but these paradigm shifts have happened to the underground before, and they hint at the opportunities and difficulties of the current situation....
Just like the classic punks, PC Music can be heard as dramatizing the decline of good taste at the hands of modernity, and in 2014 that means noble underground traditions like all that monochrome club/post-club music that rakes reverentially and melancholically through 30 years of analogue production all being displaced by digital decadence, rampant excess and fucking children. PC Music are trolling old ravers, the generation that built the hardcore continuum; they're trolling old punks and their insistence on realism. They're saying, "We might as well sound like this. In a world of gloss and accelerated desire, this is what society made us." And in this regard, they're punks...
www.stewarthomesociety.org/blog
The main part of my website is here:
www.stewarthomesociety.org
And my censored YouTube profile is there:
http://www.youtube.com/stewarthome
YouTube actually removed a parody of a Fluxus film for violating their rules. This was a countdown from 10 to 1, no images in it at all, just numerals. Presumably the problem was the joke title 10 Erotic Movies - it had more than twenty thousand hits before being taken down by the authoritarians who run that platform. If YouTube won't allow a film like this, then Web 2.0 is a joke and we need to move on to Web 2.1, where we control the sites we're posting on!
However, you can listen to my punk rock slop and spoken word schlock, and even get free downloads, here:
http://www.last.fm/music/Stewart+Home
A few weeks ago I was asked to make some remarks at the MIT-Harvard Conference on the Uyghur Human Rights Crisis. When the student organizer of the conference, Zuly Mamat, asked me to speak at the event, I wasn't sure what I would say because I'm definitely not an expert on this topic. But as I dove into researching what is happening to the Uyghur community in China, I realized that it connected to a lot of the themes I have run up against in my own work, particularly the importance of considering the ethical and social implications of technology early on in the design and development process. The Uyghur human rights crisis demonstrates how the technology we build, even with the best of intentions, may be used to surveil and harm people. Many of my activities these days are focused on the prevention of misuse of technology in the future, but it requires more than just bolting ethicists onto product teams - I think it involves a fundamental shift in our priorities and a redesign of the relationship of the humanities and social sciences with engineering and science in academia and society. As a starting point, I think it is critically important to facilitate conversations about this problem through events like this one. You can view the video of the event here and read my edited remarks below.
Hello, I'm Joi Ito, the Director of the MIT Media Lab. I'm probably the least informed about this topic of everyone here, so first of all, I'm very grateful to all of the people who have been working on this topic and for helping me get more informed. I'm broadly interested in human rights, its relationship with technology and our role as Harvard and MIT and academia in general to intervene in these types of situations. So I want to talk mainly about that.
One of the things to think about, not just in this case but also more broadly, is the role of technology in surveillance and human rights. In the talks today, we've heard about some specific examples of how technology is being used to surveil the Uyghur community in China, but I thought I'd talk about it a little more generally. I specifically want to address the continuing investment in, and ascension of, engineering and the sciences through ventures like MIT's new College of Computing, in terms of their influence and the scale at which they're being deployed. I believe that thinking about the ethical aspects of these investments is essential.
I remember when J.J. Abrams, one of our Director's Fellows and a film director for those of you who don't know, visited the Media Lab. We have 500 or so ongoing projects at the Media Lab and he asked some of the students, "Do you do anything that involves things like war or surveillance or things that you know, harm people?" And all of the students said, "No, of course we don't do that kind of thing. We make technology for good." And then he said, "Well let me re-frame that question, can you imagine an evil villain in any of my shows or movies using anything here to do really terrible things?" And everybody went, "Yeah!"
What's important to understand is that most engineers and scientists are developing tools to try to help the world, whether it's trying to model the brains of children in order to increase the quality and the effectiveness of education, or using sensors to help farmers grow crops. But what most people don't spend enough time thinking about is the dual-use nature of the technology - the fact that technology can easily be used in ways that the designer did not intend.
Now, I think there are a lot of arguments about whose job it is to think about how technology can be used in unexpected and harmful ways. If I took the faculty in the Media Lab and put them on a line where at one end, the faculty believe we should think about all the social implications before doing anything, and at the other end they believe we should just build stuff and society will figure it out, I think there would be a fairly even distribution along the line. I would say that at MIT that's also roughly true. My argument is that we actually have to think more about the social implications of technology before designing it. It's very hard to un-design things, and I'm not saying that it's an easy task, and I'm not saying that we have to get everything perfect, but I think that having a more coherent view of the world and these implications is tremendously important.
The Media Lab is a little over 30 years old, and I've been there for 8 years, but I was very involved in the early days of the Internet. The other day, I was describing to Susan Silbey, the current faculty chair at MIT, how when we were building the Internet we thought if we could just provide a voice to everyone, if we could just connect everyone together, we would have world peace. I really believed that when we started, and I was expressing to Susan how naïve I feel now that the Internet has become something that's more akin to the little girl in The Exorcist, for those of you who have seen the movie. But Susan, being an anthropologist and historian, said, "Well, when you guys talked about connecting everybody together, we knew. The social scientists knew that it was going to be a mess."
One of the really important things I learned from my conversation with Susan was the extent to which the humanities have thought about and fought about a lot of these things. History has taught us a lot of these things. I know that it's somewhat taboo to invoke Nazi Germany in too many conversations, but if you look at the data that was collected in Europe to support social services, much of it was later used by the Nazis to round up and persecute the Jews. And it's not exactly the same situation, but a lot of the databases that we're creating to help poor and disadvantaged families are also being used by the immigration services to find and target people for deportation.
Even the databases and technology that we use and create for the best of intentions can be subverted depending on who's in charge. So thinking about these systems is tremendously important. At MIT, we are, and I think that Zuly mentioned some of the specifics, working with tech companies that are working directly on surveillance technology or are in some way creating technologies that could be used for surveillance in China. Again, thinking about the ethical issues is very important. I will point out that there are whole disciplines that work on this - STS, or science, technology, and society - and that's really what they do. They think about the impact of science and technology in society. They think about it in a historical context and provide us with a framework for thinking about these things. Thinking about how to integrate anthropology and STS into both the curriculum and the research at MIT is tremendously important.
The other thing to think about is allowing engineers more freedom to explore the application and impact of their work. One of the problems with scholarship is that many researchers don't have the freedom to fully test their hypotheses. For example, in January, Eric Topol tweeted about his paper showing that of the 15 most impactful machine learning and medicine papers that had been published, none had been clinically validated. In many cases in machine learning, you get some data, you tweak it, you get a very high effectiveness, and then you walk away. Then the clinicians come in and they say "oh, but we can't replicate this, and we don't have the expertise" or "we tried it but it doesn't seem to work in practice." If you're following an academic path, we're not providing the proper incentives for computer scientists to integrate with and work closely with the clinicians in the field. One of the other challenges that we have is that our reward systems and the incentives that are in place don't encourage technologists to explore the social implications of the tech they produce. When this is the case, you fall a little bit short of actually getting to the question, "well, what does this actually mean?"
I co-teach a course at Harvard Law School called the Applied Challenges in Ethics and Governance of Artificial Intelligence, and through that class we've explored some research that considers the ethical and social impact of AI. To give you an example, one Media Lab project that we discussed was looking at risk scores used by the criminal justice system for sentencing and pre-trial assessments and bail. The project team initially thought "oh, we could just use a blockchain to verify the data and make the whole criminal sentencing system more efficient." But as the team started looking into it, they realized that the whole criminal justice system was somewhat broken. And as they started going deeper and deeper into the problem, they realized that while these prediction systems were making policing and judging possibly more efficient, they were also taking power away from the predictee and giving it to the predictor.
Basically, these automated systems were saying "okay, if you happen to live in this zip code, you will have a higher recidivism rate." But in reality, rearrest has more to do with policing and policy and the courts than it does with the criminality of the individual. By saying that this risk score can accurately predict how likely it is that this person will commit another crime, you're attributing the agency to the individual when actually much of the agency lies with the system. And by focusing on making the prediction tool more accurate, you end up ignoring existing weaknesses and biases in the overall justice system and the cause of those weaknesses. It's reminiscent of Caley Horan's writing on the history of insurance and redlining. She looks at the way in which insurance pricing, called actuarial fairness, became a legitimate way to use math to discriminate against people and how it took the debate away from the feminists and the civil rights leaders and made it an argument about the accuracy of algorithms.
The researchers who were trying to improve the criminal risk scoring system have completely pivoted to recommending that we stop using automated decision making in criminal justice. Instead they think we should use technology to look at the long term effects of policies in the criminal justice system and not to predict the criminality of individuals.
But this outcome is not common. I find that whether we're talking about tenure cases or publications or funding, we don't typically allow our researchers to end up in places that contradict the fundamental place where they started. So I think that's another thing that's really important. How do we create both research and curricular opportunities for people to explore their initial assumptions and hypotheses? As we think about this conversation, we should ask "how can we integrate this into our educational system?" Our academic process is really important and I love that we have scholars who are working on this, but how we bring this mentality to engineers and scientists is something that I'd love to think about, and maybe in the Breakout Sessions we can work on that.
Now I want to pivot a little bit and talk about the role of academia in the Uyghur crisis. I know there are people who view this meeting as provocative or political and it reminds me of the March for Science that we had several years ago. I gave a talk at the first March for Science. Before the talk, when I was at a dinner table with a bunch of faculty (I won't name the faculty), someone said, "Why are you doing that? It's very political. We try not to be political, we're just scientists." And I said, "Well when it becomes political to tell the truth, when being supportive of climate science is political, when trying to support fundamental scientific research is political, then I'm political." So I don't want to be partisan, but I think if the truth is political, then I think we need to be political.
And this is not a new concept. If you look at the history of MIT, or just the history of academic freedom (there's the Statement of Principles on Academic Freedom and Tenure), you will find a bunch of interesting MIT history. In the late '40s and '50s, during the McCarthy period, society was going after communists and left-wing people out of fear of Communism. And many institutions were turning over their left-wing Marxist academics, or firing them under pressure from the government. But MIT was quite good about protecting its Marxist-affiliated faculty, and there's a very famous case that shows this. Dirk Struik, a math professor at MIT, was indicted by the Middlesex grand jury on charges of advocating the overthrow of the US and Massachusetts governments in 1951. At the time MIT suspended him with pay, but once the court abandoned the case due to lack of evidence and the fact that states shouldn't be ruling on this type of charge, MIT reinstated Professor Struik. This is a quote from the president at the time, James Killian, about the incident.
"MIT believes that its faculty, as long as its members abide by the law, maintain the dignity and responsibility of their position, must be free to inquire, to challenge and to doubt in their search for what is true and good. They must be free to examine controversial matters, to reach conclusions of their own, to criticize and be criticized, and only through such unqualified freedom of thought and investigation can an educational institution, especially one dealing with science, perform its function of seeking truth."
Many of you may wonder why we have tenure at universities. We have tenure to protect our ability to question authority, to speak the truth and to really say what we think without fear of retribution.
There's another important case that demonstrates MIT's willingness to protect its faculty and students. In the early 1990s, MIT and a bunch of Ivy League schools came up with this idea to provide financial aid for low-income students on a need basis. The Ivy League schools got together to coordinate on how they would assess need and how they would figure out how much financial aid to give to students. Weirdly, the United States government sued the Ivy League schools, claiming this was an antitrust violation, which was ridiculous because it was a charity. Most of the other universities caved in after this lawsuit, but Chuck Vest, the president at the time, said, "MIT has a long history of admitting students based on merit and a tradition of ensuring these students full financial aid." He refused to deny students financial aid, and a multi-year lawsuit ensued, which MIT eventually won. And then this need-based scholarship system was enshrined in actual policy in the United States.
Many of the people who are here at MIT today probably don't remember this, but there's a great documentary film that shows MIT students and faculty literally clashing with police on these streets in an anti-Vietnam War protest 50 years ago. So in the not so distant past, MIT has been a very political place when it meant protecting our freedom to speak up.
More recently, I personally experienced this support for academic freedom. When Chelsea Manning's fellowship at the Harvard Kennedy School was rescinded, she emailed me and asked if she could speak at the Media Lab. I was thinking about it, and I asked the administration what they thought, and they thought it was a terrible idea. And when they told me that, I said, "You know, now that means I have to invite her." I remember our Provost Marty saying, "I know." And that's what I think is wonderful about being here at MIT: the fact that the administration understands that faculty must be allowed to act independently of the Institute. Another example is when the administration was deciding what to do about funding from Saudi Arabia. The administration released a report, which had a few critics, that basically said, "we're going to let people decide what they want to do." I think each group or faculty member at MIT is permitted to make their own decision about whether to accept funding from Saudi Arabia. MIT, in my experience, has always stood by the academic freedom of whatever unit at the Institute is trying to do what it wants to do.
I think we're in a very privileged place, and I think that it's not only our freedom but our obligation to speak up. It's also our responsibility to fight for the academic freedom of people in our community as well as people in other communities, and to provide leadership. I really do want to thank the organizers of this conference for doing that. I think it's very bold, but I think it's very becoming of both MIT and Harvard. I read a very disturbing report from Human Rights Watch that talked about how Chinese scholars overseas are starting to have difficulties in speaking up, which I think is somewhat unprecedented because of the capabilities of today's technology. And I think there are similar reports about scholars from Saudi Arabia. The ability of these countries to surveil their citizens overseas and impinge on their academic freedom is a tremendously important topic to discuss and to think about, technically, legally, and otherwise. I think it's also very important for us to talk about how to protect the freedoms of students studying here.
Thank you again for making this topic now very front of mind for me. On the panel I'd love to try to describe some concrete steps that we can take to continue to protect this freedom that we have. Thank you.
Almost exactly two years ago, I noted here the comprehensive features that Google provides for users to access their Google-related activity data, and to control and/or delete it in a variety of ways. Please see:
The Google Page That Google Haters Don't Want You to Know About – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about
and:
Quick Tutorial: Deleting Your Data Using Google's “My Activity” – https://lauren.vortex.com/2017/04/24/quick-tutorial-deleting-your-data-using-googles-my-activity
Today Google announced a new feature that I've long been hoping for — the option to automatically delete these kinds of data after specific periods of time have elapsed (3-month and 18-month options). And of course, you still have the ability to use the longstanding manual features for control and deletion of such data whenever you desire, as described at the links mentioned above.
The new auto-delete feature will be deployed over the coming weeks, first to Location History and Web & App Activity.
This is really quite excellent. It means that you can take advantage of the customization and other capabilities that are made possible by leaving data collection enabled, but if you’re concerned about longer term storage of that data, you’ll be able to activate auto-delete and really get the best of both worlds without needing to manually delete data yourself at intervals.
Auto-delete is a major privacy-positive milestone for Google, and is a model that other firms should follow.
My kudos to the Google teams involved!
–Lauren–
Could machine learning/AI techniques help to prevent mass shootings or other kinds of terrorist attacks? That’s the question. I do not profess to know the answer — but it’s a question that as a society we must seriously consider.
A notable and relatively recent attribute of many mass attacks is that the criminal perpetrators don't only want to kill; they want as large an audience as possible for their murderous activities, frequently planning their attacks openly on the Internet, even announcing the initiation of their killing sprees online and providing live video streams as well. Sometimes they use private forums for this purpose, but public forums seem to be even more popular in this context, given their potential for capturing larger audiences.
It’s particularly noteworthy that in some of these cases, members of the public were indeed aware of such attack planning and announcements due to those public postings, but chose not to report them. The reasons for the lack of reporting can be several. Users may be unsure whether or not the posts are serious, and don’t want to report someone for a fake attack scenario. Other users may want to report but not know where to report such a situation. And there may be other users who are actually urging the perpetrator onward to the maximum possible violence.
“Freedom of speech” and some privacy protections are generally viewed as ending where credible threats begin. Particularly in the context of public postings, this suggests that detecting these kinds of attacks before they have actually occurred may possibly be viewed as a kind of “big data” problem.
We can relatively easily list some of the factors that would need to be considered in these respects.
What level of resources would be required to keep an “automated” watch on at least the public postings and sites most likely to harbor the kinds of discussions and “attack manifestos” of concern? Could tools be developed to help separate false positive, faked, forged, or other “fantasy” attack postings from the genuine ones? How would these be tracked over time to include other sites involved in these operations, and to prevent “gaming” of the systems that might attempt to divert these tools away from genuine attack planning?
Clearly — as in many AI-related areas — automated systems would not be adequate by themselves to trigger full-scale alarms. These systems would primarily act as big filters, passing their perceived alerts along to human teams — with those teams making the final determinations as to dispositions and possible referrals to law enforcement for investigatory or immediate preventative actions.
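To make that division of labor concrete, here is a minimal sketch of such a two-stage pipeline in Python. Everything in it (the keyword scoring, the threshold, the names) is purely a hypothetical illustration, not any real system's design; an actual filter would use trained classifiers, not keyword counts:

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def threat_score(post):
    # Placeholder scoring function; a real first stage would be a
    # trained classifier rather than a keyword count.
    keywords = ("attack", "manifesto", "livestream")
    hits = sum(word in post.text.lower() for word in keywords)
    return hits / len(keywords)

REVIEW_THRESHOLD = 0.5  # deliberately loose: this stage only triages

def triage(posts):
    # The automated stage never triggers alarms on its own; it only
    # queues candidates for the human review team.
    return [p for p in posts if threat_score(p) >= REVIEW_THRESHOLD]

queue = triage([Post("1", "Planning an attack. Read my manifesto.")])
for post in queue:
    print(f"Escalating post {post.post_id} for human review")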
It can be reasonably argued that anyone publicly posting the kinds of specific planning materials that have been discovered in the wake of recent attacks has effectively surrendered various “rights” to privacy that might ordinarily be in force.
The fact that we keep discovering these kinds of directly related discussions and threats publicly online in the wake of these terrorist attacks suggests that we are not effectively using the public information that is already available to stop these attacks before they occur.
To the extent that AI/machine learning technologies — in concert with human analysis and decision-making — may possibly provide a means to improve this situation, we should certainly at least be exploring the practical possibilities and associated issues.
–Lauren–
Applied Ethical and Governance Challenges in Artificial Intelligence (AI)
Part 3: Intervention
We recently completed the third and final section of the course that I co-taught with Jonathan Zittrain and that was TA'ed by Samantha Bates, John Bowers, and Natalie Saltiel. The plan was to bring the discussion of diagnosis and prognosis in for a landing and figure out how to intervene.
The first class of this section (the eighth class of the course) looked at the use of algorithms in decision making. One paper we read was the most recent in a series of papers by Jon Kleinberg, Sendhil Mullainathan, and Cass Sunstein supporting the use of algorithms in decision making such as pretrial risk assessments; this particular paper focused on using algorithms to measure the bias of the decision making. Sendhil Mullainathan, one of the authors of the paper, joined us in the class. The second paper was by Rodrigo Ochigame, a student of history and of science, technology, and society (STS), who criticized the fundamental premise of reducing notions such as "fairness" to "computational formalisms" such as algorithms. The discussion, which at points took the form of a lively debate, was extremely interesting and helped us and the students see how important it is to question the framing of the questions and the assumptions we often make when we begin working on a solution without first coming to a societal agreement on the problem.
In the case of pretrial risk assessments, the basic question of whether rearrests are more an indicator of policing practice or of the "criminality of the individual" fundamentally changes the task: should the focus be on the "fairness" and accuracy of predictions of an individual's criminality, or should we be questioning the entire system of incarceration and its assumptions?
At the end of the class, Sendhil agreed to return to have a deeper and longer conversation with my Humanizing AI in Law (HAL) team to discuss this issue further.
In the next class, we discussed the history of causal inference and how statistics and correlation have dominated modern machine learning and data analysis. We discussed the difficulties and challenges in validating causal claims but also the importance of causal claims. In particular, we looked at how legal precedent has from time to time made references to the right to individualized sentencing. Clearly, risk scores used in sentencing that are protected by trade secrets and confidentiality agreements challenge the right to due process as expressed in the Wisconsin v. Loomis case as well as the right to an individualized sentence.
The last class focused on adversarial examples and technical debt, which helped us think about when and how policies and important "tests" and controls can and should be put in place, versus when, if ever, we should just "move quickly and break things." I'm not sure it was the consensus of the class, but I felt that we need a new design process in which users and members of the affected communities create design stories and "tests" that are integrated into the development process: participant design deeply woven into something like the story and test development of agile software development. Fairness and other contextual parameters are dynamic and can only be managed through interaction with the systems in which the algorithms are deployed. Figuring out how to integrate the dynamic nature of the social system seems like a possible approach for mitigating a category of technical debt and for avoiding systems that are untethered from the normative environments in which they are deployed.
Throughout the course, I observed students learning from one another, rethinking their own assumptions, and collaborating on projects outside of class. We may not have figured out how to eliminate algorithmic bias or come up with a satisfactory definition of what makes an autonomous system interpretable, but we did find ourselves having conversations and coming to new points of view that I don't think would have happened otherwise.
It is clear that integrating the humanities and social science into the conversation about law, economics, and technology is required if we are to navigate ourselves out of the mess we've created and chart a way forward into our uncertain future with our increasingly algorithmic societal systems.
Syllabus Notes
By Samantha Bates
In our final stage of the course, the intervention stage, we investigated potential solutions to the problems we identified earlier in the course. Class discussions included consideration of the various tradeoffs of implementing potential solutions and places to intervene in different systems. We also investigated the balance between waiting to address potential weaknesses in a given system until after deployment versus proactively correcting deficiencies before deploying the autonomous system.
Class Session 8: Intervening on behalf of fairness
This class was structured as a conversation involving two guests, University of Chicago Booth School of Business Professor Sendhil Mullainathan and MIT PhD student Rodrigo Ochigame. As a class we debated whether elements of the two papers were reconcilable given their seemingly opposite viewpoints.
- "Discrimination in the Age of Algorithms" by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein (February 2019).
- [FORTHCOMING] "The Illusion of Algorithmic Fairness" by Rodrigo Ochigame (2019)
The main argument in "Discrimination in the Age of Algorithms" is that algorithms make it easier to identify and prevent discrimination. The authors point out that current obstacles to proving discrimination are primarily caused by opacity around human decision making. Human decision makers can make up justifications for their decisions after the fact or may be influenced by bias without even knowing it. The authors argue that by making algorithms transparent, primarily through the use of counterfactuals, we can determine which components of the algorithm are causing a biased outcome. The paper also suggests that we allow algorithms to consider personal attributes such as race and gender in certain contexts because doing so could help counteract human bias. For example, if managers consistently give higher performance ratings to male workers than to female workers, the algorithm won't be able to figure out that managers are discriminating against women in the workplace if it can't incorporate data about gender. But if we allow the algorithm to be aware of gender when calculating work productivity, it may be able to uncover existing biases and prevent them from being perpetuated.
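As a rough illustration of the counterfactual idea (a sketch of the general technique, not the paper's actual method), one can flip a protected attribute in an input record and observe how the model's output changes. The toy model and numbers below are invented for illustration:

def rating_model(record):
    # Toy stand-in for a learned performance-rating model; the gender
    # term represents bias absorbed from biased training data.
    score = 2.0 * record["output"]
    if record["gender"] == "male":
        score += 0.5
    return score

def counterfactual_gap(record):
    # Flip the protected attribute while holding everything else fixed.
    flipped_gender = "female" if record["gender"] == "male" else "male"
    counterfactual = dict(record, gender=flipped_gender)
    return rating_model(record) - rating_model(counterfactual)

worker = {"output": 3.0, "gender": "male"}
print(counterfactual_gap(worker))  # 0.5: the rating depends on gender

A nonzero gap shows that the model's output depends on the protected attribute itself, which is exactly the kind of component-level evidence of bias the authors argue transparency makes available.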
The second assigned reading, "The Illusion of Algorithmic Fairness," demonstrates that attempts to reduce elements of fairness to mathematical equations have persisted throughout history. Discussions about algorithmic fairness today mirror many of the same points of contention reached in past debates about fairness, such as whether we should optimize for utility or optimize for fair outcomes. Consequently, fairness debates today have inherited some assumptions from these past discussions. In particular, we "take many concepts for granted including probability, risk, classification, correlation, regression, optimization, and utility." The author argues that despite our technical advances, fairness remains "irreducible to a mathematical property of algorithms, independent from specific social contexts." He shows that any attempt at formalism will ultimately be influenced by the social and political climate of the time. Moreover, researchers frequently use misrepresentative historical data to create "fair" algorithms. The way that the data is framed and interpreted can be misrepresentative and frequently reinforces existing discrimination (for example, predictive policing algorithms predict future policing, not future crime).
These readings set the stage for a conversation about how we should approach developing interventions. While "Discrimination in the Age of Algorithms" makes a strong case for using algorithms (in conjunction with counterfactuals) to improve the status quo and make it easier to prove discrimination in court, "The Illusion of Algorithmic Fairness" cautions against trying to reduce components of fairness to mathematical properties. The latter paper shows that this is not a new endeavor: humans have tried to standardize the concept of fairness since as early as the 1700s, and we have proved time and again that determining what is fair and what is unfair is much too complicated and context-dependent to model in an algorithm.
Class Session 9: Intervening on behalf of interpretability
In our second to last class, we discussed causal inference, how it differs from correlative machine learning techniques, and its benefits and drawbacks. We then considered how causal models could be deployed in the criminal justice context to generate individualized sentences and what an algorithmically informed individualized sentence would look like.
- The Book of Why by Judea Pearl and Dana Mackenzie, Basic Books (2018). Read Introduction.
- State of Wisconsin v. Eric Loomis (2016), paragraphs 11-28 and 67-74.
The Book of Why describes the emerging field of causal inference, which attempts to model how the human brain works by considering cause and effect relationships. The introduction delves a little into the history of causal inference and explains that it took time for the field to develop because it was nearly impossible for scientists to communicate causal relationships using mathematical terms. We've now devised ways to model what the authors call "the do-operator" (which indicates that there was some action/form of intervention that makes the relationship causal rather than correlative) through diagrams, mathematical formulas and lists of assumptions.
One main point of the introduction and the book is that "data are dumb" because they don't explain why something happened. A key component of causal inference is the creation of counterfactuals to help us understand what would have happened had certain circumstances been different. The hope with causal inference is that it will be less impacted by bias because causal inference models do not look for correlations in data, but rather focus on the "do-operator." A causal inference approach may also make algorithms more interpretable because counterfactuals will offer a better way to understand how the AI makes decisions.
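For readers new to the notation, the difference between seeing and doing can be written compactly. The observational quantity $P(Y = y \mid X = x)$ is the probability of $Y = y$ among cases where $X$ happened to be $x$, while the interventional quantity $P(Y = y \mid \mathrm{do}(X = x))$ is the probability of $Y = y$ when we force $X$ to be $x$. The two generally differ: a low barometer reading predicts rain, so $P(\text{rain} \mid \text{low barometer})$ is high, but pushing the needle down by hand does nothing to the sky, so $P(\text{rain} \mid \mathrm{do}(\text{low barometer}))$ is just the base rate of rain. (This is a standard textbook illustration, not a passage from the book.)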
The other assigned reading, State of Wisconsin v. Eric Loomis, is a 2016 case about the use of risk assessment tools in the criminal justice system. In Loomis, the court used a risk assessment tool, COMPAS, to determine the defendant's risk of pretrial recidivism, general recidivism, and violent recidivism. The key question in this case was whether the judge should be able to consider the risk scores when determining a defendant's sentence. The Wisconsin Supreme Court decided that judges could consider the risk score because they also take into account other evidence when making sentencing decisions. For the purposes of this class, the case provided a lead-in to a discussion about the right to an individualized sentence and whether risk assessment scores can result in fairer outcomes for defendants. However, it turns out that risk assessment tools should not be employed if the goal is to produce individualized sentences. Despite their appearance of generating unique risk scores for defendants, risk assessment scores are not individualized: they compare information about an individual defendant to data about similar groups of offenders to determine that individual's recidivism risk.
Class Session 10: Intervening against adversarial examples and course conclusion
We opened our final class with a discussion about adversarial examples and technical debt before wrapping up the course with a final reflection on the broader themes and findings of the course.
- "Hidden Technical Debt in Machine Learning Systems" by D. Sculley et al., NIPS (2015)
The term "technical debt" refers to the challenge of keeping machine learning systems up to date. While technical debt is a factor in any type of technical system, machine learning systems are particularly susceptible to collecting a lot of technical debt because they tend to involve many layers of infrastructure (code and non code). Technical debt also tends to accrue more in systems that are developed and deployed quickly. In a time crunch, it is more likely that new features will be added without deleting old ones and that the systems will not be checked for redundant features or unintended feedback loops before they are deployed. In order to combat technical debt, the authors suggest several approaches including, fostering a team culture that encourages simplifying systems and eliminating unnecessary features and creating an alert system that signals when a system has run up against pre-programmed limits and requires review.
During the course retrospective, students identified several overarching themes of the class, including the effectiveness and importance of interdisciplinary learning; the tendency of policymakers and industry leaders to emphasize short-term outcomes over the long-term consequences of decisions; the challenge of teaching engineers to consider the ethical implications of their work during the development process; and the lack of input from diverse groups in system design and deployment.
More later...
Despite my very long history of enjoying “apocalyptic” and “technology run amok” sci-fi films, I’ve been forthright in my personal belief that AI and associated machine learning systems hold enormous promise for the betterment of our lives and our planet (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).
Of course there are definitely ways that we could screw this up. So deep discussion from a wide variety of viewpoints is critical to “accentuate the positive — eliminate the negative” (as the old Bing Crosby song lyrics suggest).
A time-tested model for firms needing to deal with these kinds of complex situations is the appointment of external interdisciplinary advisory panels.
Google announced its own such panel, the "Advanced Technology External Advisory Council" (ATEAC), last week.
Controversy immediately erupted both inside and outside of Google, particularly relating to the presence of Kay Cole James, president of the prominent right-wing think tank the Heritage Foundation. Another invited member — behavioral economist and privacy researcher Alessandro Acquisti — has now pulled out of ATEAC, apparently due to James' presence on the panel and the resulting protests.
This is all extraordinarily worrisome.
While I abhor the sentiments of the Heritage Foundation, an AI advisory panel composed only of "yes men" in agreement with more left-wing (and, admittedly, my own) philosophies regarding social issues strikes me as vastly more dangerous.
Keeping in mind that advisory panels typically do not make policy — they only make recommendations — it is critical to have a wide range of input to these panels, including views with which we may personally strongly disagree, but that — like it or not — significant numbers of politicians and voters do enthusiastically agree with. The man sitting in the Oval Office right now is demonstrable proof that such views — however much we may despise them personally — are most definitely in the equation.
“Filter bubbles” are extraordinarily dangerous on both the right and left. One of the reasons why I so frequently speak on national talk radio — whose audiences are typically very much skewed to the right — is that I view this as an opportunity to speak truth (as I see it) regarding technology issues to listeners who are not often exposed to views like mine from the other commentators that they typically see and hear. And frequently, I afterwards receive emails saying “Thanks for explaining this like you did — I never heard it explained that way before” — making it all worthwhile as far as I’m concerned.
Not attempting to include a wide variety of viewpoints on a panel dealing with a subject as important as AI would not only give the appearance of “stacking the deck” to favor preconceived outcomes, but would in fact be doing exactly that, opening up the firms involved to attacks by haters and pandering politicians who would just love to impose draconian regulatory regimes for their own benefits.
The presence on an advisory panel of someone with whom other members may dramatically disagree does not imply endorsement of that individual.
I want to know what people who disagree with me are thinking. I want to hear from them. There’s an old saying: “Keep your friends close and your enemies closer.” Ignoring that adage is beyond foolish.
We can certainly argue regarding the specific current appointments to ATEAC, but viewing an advisory panel like this as some sort of rubber stamp for our preexisting opinions would be nothing less than mental malpractice.
AI is far too crucial to all of our futures for us to fall into that sort of intellectual trap.
–Lauren–
In the late 1970s, the computer, which for decades had been a mysterious, hulking machine that only did the bidding of corporate overlords, suddenly became something the average person could buy and take home. An enthusiastic minority saw how great this was and rushed to get a computer of their own. For many more people, the arrival of the microcomputer triggered helpless anxiety about the future. An ad from a magazine at the time promised that a home computer would "give your child an unfair advantage in school." It showed a boy in a smart blazer and tie eagerly raising his hand to answer a question, while behind him his dim-witted classmates look on sullenly. The ad and others like it implied that the world was changing quickly and, if you did not immediately learn how to use one of these intimidating new devices, you and your family would be left behind.
In the UK, this anxiety metastasized into concern at the highest levels of government about the competitiveness of the nation. The 1970s had been, on the whole, an underwhelming decade for Great Britain. Both inflation and unemployment had been high. Meanwhile, a series of strikes put London through blackout after blackout. A government report from 1979 fretted that a failure to keep up with trends in computing technology would "add another factor to our poor industrial performance."1 The country already seemed to be behind in the computing arena—all the great computer companies were American, while integrated circuits were being assembled in Japan and Taiwan.
In an audacious move, the BBC, a public service broadcaster funded by the government, decided that it would solve Britain's national competitiveness problems by helping Britons everywhere overcome their aversion to computers. It launched the Computer Literacy Project, a multi-pronged educational effort that involved several TV series, a few books, a network of support groups, and a specially built microcomputer known as the BBC Micro. The project was so successful that, by 1983, an editor for BYTE Magazine wrote, "compared to the US, proportionally more of Britain's population is interested in microcomputers."2 The editor marveled that there were more people at the Fifth Personal Computer World Show in the UK than had been to that year's West Coast Computer Faire. Over a sixth of Great Britain watched an episode in the first series produced for the Computer Literacy Project and 1.5 million BBC Micros were ultimately sold.3
An archive containing every TV series produced and all the materials published for the Computer Literacy Project was put on the web last year. I've had a huge amount of fun watching the TV series and trying to imagine what it would have been like to learn about computing in the early 1980s. But what's turned out to be more interesting is how computing was taught. Today, we still worry about technology leaving people behind. Wealthy tech entrepreneurs and governments spend lots of money trying to teach kids "to code." We have websites like Codecademy that make use of new technologies to teach coding interactively. One would assume that this approach is more effective than a goofy '80s TV series. But is it?
The Computer Literacy Project

The microcomputer revolution began in 1975 with the release of the Altair 8800. Only two years later, the Apple II, TRS-80, and Commodore PET had all been released. Sales of the new computers exploded. In 1978, the BBC explored the dramatic societal changes these new machines were sure to bring in a documentary called "Now the Chips Are Down."
The documentary was alarming. Within the first five minutes, the narrator explains that microelectronics will "totally revolutionize our way of life." As eerie synthesizer music plays, and green pulses of electricity dance around a magnified microprocessor on screen, the narrator argues that the new chips are why "Japan is abandoning its ship building, and why our children will grow up without jobs to go to." The documentary goes on to explore how robots are being used to automate car assembly and how the European watch industry has lost out to digital watch manufacturers in the United States. It castigates the British government for not doing more to prepare the country for a future of mass unemployment.
The documentary was supposedly shown to the British Cabinet.4 Several government agencies, including the Department of Industry and the Manpower Services Commission, became interested in trying to raise awareness about computers among the British public. The Manpower Services Commission provided funds for a team from the BBC's education division to travel to Japan, the United States, and other countries on a fact-finding trip. This research team produced a report that cataloged the ways in which microelectronics would indeed mean major changes for industrial manufacturing, labor relations, and office work. In late 1979, it was decided that the BBC should make a ten-part TV series that would help regular Britons "learn how to use and control computers and not feel dominated by them."5 The project eventually became a multimedia endeavor similar to the Adult Literacy Project, an earlier BBC undertaking that involved a TV series and supplemental courses and helped two million people improve their reading.
The producers behind the Computer Literacy Project were keen for the TV series to feature "hands-on" examples that viewers could try on their own if they had a microcomputer at home. These examples would have to be in BASIC, since that was the language (really the entire shell) used on almost all microcomputers. But the producers faced a thorny problem: Microcomputer manufacturers all had their own dialects of BASIC, so no matter which dialect they picked, they would inevitably alienate some large fraction of their audience. The only real solution was to create a new BASIC—BBC BASIC—and a microcomputer to go along with it. Members of the British public would be able to buy the new microcomputer and follow along without worrying about differences in software or hardware.
The TV producers and presenters at the BBC were not capable of building a microcomputer on their own. So they put together a specification for the computer they had in mind and invited British microcomputer companies to propose a new machine that met the requirements. The specification called for a relatively powerful computer because the BBC producers felt that the machine should be able to run real, useful applications. Technical consultants for the Computer Literacy Project also suggested that, if it had to be a BASIC dialect that was going to be taught to the entire nation, then it had better be a good one. (They may not have phrased it exactly that way, but I bet that's what they were thinking.) BBC BASIC would make up for some of BASIC's usual shortcomings by allowing for recursion and local variables.6
The BBC eventually decided that a Cambridge-based company called Acorn Computers would make the BBC Micro. In choosing Acorn, the BBC passed over a proposal from Clive Sinclair, who ran a company called Sinclair Research. Sinclair Research had brought mass-market microcomputing to the UK in 1980 with the Sinclair ZX80. Sinclair's new computer, the ZX81, was cheap but not powerful enough for the BBC's purposes. Acorn's new prototype computer, known internally as the Proton, would be more expensive but more powerful and expandable. The BBC was impressed. The Proton was never marketed or sold as the Proton because it was instead released in December 1981 as the BBC Micro, also affectionately called "The Beeb." You could get a 16k version for £235 and a 32k version for £335.
In 1980, Acorn was an underdog in the British computing industry. But the BBC Micro helped establish the company's legacy. Today, the world's most popular microprocessor instruction set is the ARM architecture. "ARM" now stands for "Advanced RISC Machine," but originally it stood for "Acorn RISC Machine." ARM Holdings, the company behind the architecture, was spun out from Acorn in 1990.
A bad picture of a BBC Micro, taken by me at the Computer History Museum in Mountain View, California.
A dozen different TV series were eventually produced as part of the Computer Literacy Project, but the first of them was a ten-part series known as The Computer Programme. The series was broadcast over ten weeks at the beginning of 1982. A million people watched each week-night broadcast of the show; a quarter million watched the reruns on Sunday and Monday afternoon.
The show was hosted by two presenters, Chris Serle and Ian McNaught-Davis. Serle plays the neophyte while McNaught-Davis, who had professional experience programming mainframe computers, plays the expert. This was an inspired setup. It made for awkward transitions—Serle often goes directly from a conversation with McNaught-Davis to a bit of walk-and-talk narration delivered to the camera, and you can't help but wonder whether McNaught-Davis is still standing there out of frame or what. But it meant that Serle could voice the concerns that the audience would surely have. He can look intimidated by a screenful of BASIC and can ask questions like, "What do all these dollar signs mean?" At several points during the show, Serle and McNaught-Davis sit down in front of a computer and essentially pair program, with McNaught-Davis providing hints here and there while Serle tries to figure it out. It would have been much less relatable if the show had been presented by a single, all-knowing narrator.
The show also made an effort to demonstrate the many practical applications of computing in the lives of regular people. By the early 1980s, the home computer had already begun to be associated with young boys and video games. The producers behind The Computer Programme sought to avoid interviewing "impressively competent youngsters," as that was likely "to increase the anxieties of older viewers," a demographic that the show was trying to attract to computing.7 In the first episode of the series, Gill Nevill, the show's "on location" reporter, interviews a woman who has bought a Commodore PET to help manage her sweet shop. The woman (her name is Phyllis) looks to be 60-something years old, yet she has no trouble using the computer to do her accounting and has even started using her PET to do computer work for other businesses, which sounds like the beginning of a promising freelance career. Phyllis says that she wouldn't mind if the computer work grew to replace her sweet shop business since she enjoys the computer work more. This interview could instead have been an interview with a teenager about how he had modified Breakout to be faster and more challenging. But that would have been encouraging to almost nobody. On the other hand, if Phyllis, of all people, can use a computer, then surely you can too.
While the show features lots of BASIC programming, what it really wants to teach its audience is how computing works in general. The show explains these general principles with analogies. In the second episode, there is an extended discussion of the Jacquard loom, which accomplishes two things. First, it illustrates that computers are not based only on magical technology invented yesterday—some of the foundational principles of computing go back two hundred years and are about as simple as the idea that you can punch holes in card to control a weaving machine. Second, the interlacing of warp and weft threads is used to demonstrate how a binary choice (does the weft thread go above or below the warp thread?) is enough, when repeated over and over, to produce enormous variation. This segues, of course, into a discussion of how information can be stored using binary digits.
Later in the show there is a section about a steam organ that plays music encoded in a long, segmented roll of punched card. This time the analogy is used to explain subroutines in BASIC. Serle and McNaught-Davis lay out the whole roll of punched card on the floor in the studio, then point out the segments where it looks like a refrain is being repeated. McNaught-Davis explains that a subroutine is what you would get if you cut out those repeated segments of card and somehow added an instruction to go back to the original segment that played the refrain for the first time. This is a brilliant explanation and probably one that stuck around in people's minds for a long time afterward.
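If you want to see the idea in modern terms, here is the refrain-as-subroutine analogy as a few lines of Python (my own sketch, not anything from the show):

# The steam organ's repeated refrain becomes a subroutine: the "roll"
# no longer contains repeated segments of card, it just jumps to the
# refrain and comes back.

def refrain():
    print("la la la (the refrain)")

def verse(n):
    print(f"verse {n}")

for n in (1, 2, 3):
    verse(n)
    refrain()  # equivalent to "go back to the refrain segment"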
I've picked out only a few examples, but I think in general the show excels at demystifying computers by explaining the principles that computers rely on to function. The show could instead have focused on teaching BASIC, but it did not. This, it turns out, was very much a conscious choice. In a retrospective written in 1983, John Radcliffe, the executive producer of the Computer Literacy Project, wrote the following:
If computers were going to be as important as we believed, some genuine understanding of this new subject would be important for everyone, almost as important perhaps as the capacity to read and write. Early ideas, both here and in America, had concentrated on programming as the main route to computer literacy. However, as our thinking progressed, although we recognized the value of "hands-on" experience on personal micros, we began to place less emphasis on programming and more on wider understanding, on relating micros to larger machines, encouraging people to gain experience with a range of applications programs and high-level languages, and relating these to experience in the real world of industry and commerce…. Our belief was that once people had grasped these principles, at their simplest, they would be able to move further forward into the subject.
Later, Radcliffe writes, in a similar vein:
There had been much debate about the main explanatory thrust of the series. One school of thought had argued that it was particularly important for the programmes to give advice on the practical details of learning to use a micro. But we had concluded that if the series was to have any sustained educational value, it had to be a way into the real world of computing, through an explanation of computing principles. This would need to be achieved by a combination of studio demonstration on micros, explanation of principles by analogy, and illustration on film of real-life examples of practical applications. Not only micros, but mini computers and mainframes would be shown.
I love this, particularly the part about mini-computers and mainframes. The producers behind The Computer Programme aimed to help Britons get situated: Where had computing been, and where was it going? What can computers do now, and what might they do in the future? Learning some BASIC was part of answering those questions, but knowing BASIC alone was not seen as enough to make someone computer literate.
Computer Literacy Today

If you google "learn to code," the first result you see is a link to Codecademy's website. If there is a modern equivalent to the Computer Literacy Project, something with the same reach and similar aims, then it is Codecademy.
"Learn to code" is Codecademy's tagline. I don't think I'm the first person to point this out—in fact, I probably read this somewhere and I'm now ripping it off—but there's something revealing about using the word "code" instead of "program." It suggests that the important thing you are learning is how to decode the code, how to look at a screen's worth of Python and not have your eyes glaze over. I can understand why to the average person this seems like the main hurdle to becoming a professional programmer. Professional programmers spend all day looking at computer monitors covered in gobbledygook, so, if I want to become a professional programmer, I better make sure I can decipher the gobbledygook. But dealing with syntax is not the most challenging part of being a programmer, and it quickly becomes almost irrelevant in the face of much bigger obstacles. Also, armed only with knowledge of a programming language's syntax, you may be able to read code but you won't be able to write code to solve a novel problem.
I recently went through Codecademy's "Code Foundations" course, which is the course that the site recommends you take if you are interested in programming (as opposed to web development or data science) and have never done any programming before. There are a few lessons in there about the history of computer science, but they are perfunctory and poorly researched. (Thank heavens for this noble internet vigilante, who pointed out a particularly egregious error.) The main focus of the course is teaching you about the common structural elements of programming languages: variables, functions, control flow, loops. In other words, the course focuses on what you would need to know to start seeing patterns in the gobbledygook.
To be fair to Codecademy, they offer other courses that look meatier. But even courses such as their "Computer Science Path" course focus almost exclusively on programming and concepts that can be represented in programs. One might argue that this is the whole point—Codecademy's main feature is that it gives you little interactive programming lessons with automated feedback. There also just isn't enough room to cover more because there is only so much you can stuff into somebody's brain in a little automated lesson. But the producers at the BBC tasked with kicking off the Computer Literacy Project also had this problem; they recognized that they were limited by their medium and that "the amount of learning that would take place as a result of the television programmes themselves would be limited."8 With similar constraints on the volume of information they could convey, they chose to emphasize general principles over learning BASIC. Couldn't Codecademy replace a lesson or two with an interactive visualization of a Jacquard loom weaving together warp and weft threads?
I'm banging the drum for "general principles" loudly now, so let me just explain what I think they are and why they are important. There's a book by J. Clark Scott about computers called But How Do It Know? The title comes from the anecdote that opens the book. A salesman is explaining to a group of people that a thermos can keep hot food hot and cold food cold. A member of the audience, astounded by this new invention, asks, "But how do it know?" The joke of course is that the thermos is not perceiving the temperature of the food and then making a decision—the thermos is just constructed so that cold food inevitably stays cold and hot food inevitably stays hot. People anthropomorphize computers in the same way, believing that computers are digital brains that somehow "choose" to do one thing or another based on the code they are fed. But learning a few things about how computers work, even at a rudimentary level, takes the homunculus out of the machine. That's why the Jacquard loom is such a good go-to illustration. It may at first seem like an incredible device. It reads punch cards and somehow "knows" to weave the right pattern! The reality is mundane: Each row of holes corresponds to a thread, and where there is a hole in that row the corresponding thread gets lifted. Understanding this may not help you do anything new with computers, but it will give you the confidence that you are not dealing with something magical. We should impart this sense of confidence to beginners as soon as we can.
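In fact, the loom's mundane reality fits in a few lines of Python (again my own sketch, with an invented card pattern):

# Each row of the punch card is one pass of the loom: where there is
# a hole, the corresponding warp thread gets lifted.

card = [
    [1, 0, 1, 0, 1, 0],  # holes lift threads 0, 2, 4
    [0, 1, 0, 1, 0, 1],  # holes lift threads 1, 3, 5
]

for row_number, row in enumerate(card, start=1):
    lifted = [thread for thread, hole in enumerate(row) if hole]
    print(f"row {row_number}: lift threads {lifted}")

There is no magic anywhere in that loop: a hole either lifts a thread or it doesn't.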
Alas, it's possible that the real problem is that nobody wants to learn about the Jacquard loom. Judging by how Codecademy emphasizes the professional applications of what it teaches, many people probably start using Codecademy because they believe it will help them "level up" their careers. They believe, not unreasonably, that the primary challenge will be understanding the gobbledygook, so they want to "learn to code." And they want to do it as quickly as possible, in the hour or two they have each night between dinner and collapsing into bed. Codecademy, which after all is a business, gives these people what they are looking for—not some roundabout explanation involving a machine invented in the 18th century.
The Computer Literacy Project, on the other hand, is what a bunch of producers and civil servants at the BBC thought would be the best way to educate the nation about computing. I admit that it is a bit elitist to suggest we should laud this group of people for teaching the masses what they were incapable of seeking out on their own. But I can't help but think they got it right. Lots of people first learned about computing using a BBC Micro, and many of these people went on to become successful software developers or game designers. As I've written before, I suspect learning about computing at a time when computers were relatively simple was a huge advantage. But perhaps another advantage these people had is shows like The Computer Programme, which strove to teach not just programming but also how and why computers can run programs at all. After watching The Computer Programme, you may not understand all the gobbledygook on a computer screen, but you don't really need to because you know that, whatever the "code" looks like, the computer is always doing the same basic thing. After a course or two on Codecademy, you understand some flavors of gobbledygook, but to you a computer is just a magical machine that somehow turns gobbledygook into running software. That isn't computer literacy.
If you enjoyed this post, more like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.
Previously on TwoBitHistory…
FINALLY some new damn content, amirite?
— TwoBitHistory (@TwoBitHistory) February 1, 2019
Wanted to write an article about how Simula bought us object-oriented programming. It did that, but early Simula also flirted with a different vision for how OOP would work. Wrote about that instead!https://t.co/AYIWRRceI6
1. Robert Albury and David Allen, Microelectronics, report (1979).
2. Gregg Williams, "Microcomputing, British Style," Byte Magazine, 40, January 1983, accessed March 31, 2019, https://archive.org/stream/byte-magazine-1983-01/1983_01_BYTE_08-01_Looking_Ahead#page/n41/mode/2up.
3. John Radcliffe, "Toward Computer Literacy," Computer Literacy Project Archive, 42, accessed March 31, 2019, https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/media/Towards Computer Literacy.pdf.
4. David Allen, "About the Computer Literacy Project," Computer Literacy Project Archive, accessed March 31, 2019, https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/history.
5. Ibid.
6. Williams, 51.
7. Radcliffe, 11.
8. Radcliffe, 5.
Imagine that you are sitting on the grassy bank of a river. Ahead of you, the water flows past swiftly. The afternoon sun has put you in an idle, philosophical mood, and you begin to wonder whether the river in front of you really exists at all. Sure, large volumes of water are going by only a few feet away. But what is this thing that you are calling a "river"? After all, the water you see is here and then gone, to be replaced only by more and different water. It doesn't seem like the word "river" refers to any fixed thing in front of you at all.
In 2009, Rich Hickey, the creator of Clojure, gave an excellent talk about why this philosophical quandary poses a problem for the object-oriented programming paradigm. He argues that we think of an object in a computer program the same way we think of a river—we imagine that the object has a fixed identity, even though many or all of the object's properties will change over time. Doing this is a mistake, because we have no way of distinguishing between an object instance in one state and the same object instance in another state. We have no explicit notion of time in our programs. We just breezily use the same name everywhere and hope that the object is in the state we expect it to be in when we reference it. Inevitably, we write bugs.
The solution, Hickey concludes, is that we ought to model the world not as a collection of mutable objects but a collection of processes acting on immutable data. We should think of each object as a "river" of causally related states. In sum, you should use a functional language like Clojure.
The author, on a hike, pondering the ontological commitments of object-oriented programming.
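To make Hickey's distinction concrete, here is a small Python sketch of the two styles (the names and numbers are mine; this illustrates the idea rather than Clojure's actual machinery):

from dataclasses import dataclass, replace

# Mutable style: the object has exactly one state, which is
# overwritten in place. Past states are simply gone, and any code
# holding a reference sees whatever the object happens to be "now".
class MutableRiver:
    def __init__(self, level):
        self.level = level

r = MutableRiver(level=2.0)
r.level = 2.3  # the 2.0 state no longer exists anywhere

# Functional style: each change produces a new immutable value, and
# the "river" is the causally related sequence of those states.
@dataclass(frozen=True)
class RiverState:
    level: float

history = [RiverState(level=2.0)]
history.append(replace(history[-1], level=2.3))  # new state; old one intact
print(history[0], history[-1])  # both moments in time remain available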
Since Hickey gave his talk in 2009, interest in functional programming languages has grown, and functional programming idioms have found their way into the most popular object-oriented languages. Even so, most programmers continue to instantiate objects and mutate them in place every day. And they have been doing it for so long that it is hard to imagine that programming could ever look different.
I wanted to write an article about Simula and imagined that it would mostly be about when and how object-oriented constructs we are familiar with today were added to the language. But I think the more interesting story is about how Simula was originally so unlike modern object-oriented programming languages. This shouldn't be a surprise, because the object-oriented paradigm we know now did not spring into existence fully formed. There were two major versions of Simula: Simula I and Simula 67. Simula 67 brought the world classes, class hierarchies, and virtual methods. But Simula I was a first draft that experimented with other ideas about how data and procedures could be bundled together. The Simula I model is not a functional model like the one Hickey proposes, but it does focus on processes that unfold over time rather than objects with hidden state that interact with each other. Had Simula 67 stuck with more of Simula I's ideas, the object-oriented paradigm we know today might have looked very different indeed—and that contingency should teach us to be wary of assuming that the current paradigm will dominate forever.
Simula 0 Through 67

Simula was created by two Norwegians, Kristen Nygaard and Ole-Johan Dahl.
In the late 1950s, Nygaard was employed by the Norwegian Defense Research Establishment (NDRE), a research institute affiliated with the Norwegian military. While there, he developed Monte Carlo simulations used for nuclear reactor design and operations research. These simulations were at first done by hand and then eventually programmed and run on a Ferranti Mercury.1 Nygaard soon found that he wanted a higher-level way to describe these simulations to a computer.
The kind of simulation that Nygaard commonly developed is known as a "discrete event model." The simulation captures how a sequence of events change the state of a system over time—but the important property here is that the simulation can jump from one event to the next, since the events are discrete and nothing changes in the system between events. This kind of modeling, according to a paper that Nygaard and Dahl presented about Simula in 1966, was increasingly being used to analyze "nerve networks, communication systems, traffic flow, production systems, administrative systems, social systems, etc."2 So Nygaard thought that other people might want a higher-level way to describe these simulations too. He began looking for someone that could help him implement what he called his "Simulation Language" or "Monte Carlo Compiler."3
Dahl, who had also been employed by NDRE, where he had worked on language design, came aboard at this point to play Wozniak to Nygaard's Jobs. Over the next year or so, Nygaard and Dahl worked to develop what has been called "Simula 0."4 This early version of the language was going to be merely a modest extension to ALGOL 60, and the plan was to implement it as a preprocessor. The language was then much less abstract than what came later. The primary language constructs were "stations" and "customers." These could be used to model certain discrete event networks; Nygaard and Dahl give an example simulating airport departures.5 But Nygaard and Dahl eventually came up with a more general language construct that could represent both "stations" and "customers" and also model a wider range of simulations. This was the first of two major generalizations that took Simula from being an application-specific ALGOL package to a general-purpose programming language.
In Simula I, there were no "stations" or "customers," but these could be recreated using "processes." A process was a bundle of data attributes associated with a single action known as the process' operating rule. You might think of a process as an object with only a single method, called something like run(). This analogy is imperfect though, because each process' operating rule could be suspended or resumed at any time—the operating rules were a kind of coroutine. A Simula I program would model a system as a set of processes that conceptually all ran in parallel. Only one process could actually be "current" at any time, but once a process suspended itself the next queued process would automatically take over. As the simulation ran, behind the scenes, Simula would keep a timeline of "event notices" that tracked when each process should be resumed. In order to resume a suspended process, Simula needed to keep track of multiple call stacks. This meant that Simula could no longer be an ALGOL preprocessor, because ALGOL had only one call stack. Nygaard and Dahl were committed to writing their own compiler.
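To get a feel for this model, here is a toy sketch in Python, with generators standing in for Simula I's suspendable operating rules and a heap of "event notices" deciding which process is resumed next. This is only an illustration of the scheduling idea, not Simula's actual implementation:

import heapq

def toy_process(name, period):
    # A process runs at its scheduled moments and yields the simulated
    # time at which it wants to be resumed.
    t = 0.0
    while t < 10.0:
        print(f"{t:5.1f}: {name} runs")
        t += period
        yield t

def simulate(processes):
    # The event list plays the role of Simula's "event notices": it
    # records when each suspended process should be resumed.
    events = [(0.0, i, p) for i, p in enumerate(processes)]
    heapq.heapify(events)
    while events:
        now, i, proc = heapq.heappop(events)
        try:
            resume_at = proc.send(None)  # run the process until it suspends
            heapq.heappush(events, (resume_at, i, proc))
        except StopIteration:
            pass  # this process has run to completion

simulate([toy_process("A", 3.0), toy_process("B", 4.0)])

Each process yields the simulated time at which it wants to run again; between those moments nothing in the system changes, which is what lets a discrete event simulation jump from one event to the next.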
In their paper introducing this system, Nygaard and Dahl illustrate its use by implementing a simulation of a factory with a limited number of machines that can serve orders.6 The process here is the order, which starts by looking for an available machine, suspends itself to wait for one if none are available, and then runs to completion once a free machine is found. There is a definition of the order process that is then used to instantiate several different order instances, but no methods are ever called on these instances. The main part of the program just creates the processes and sets them running.
The first Simula I compiler was finished in 1965. The language grew popular at the Norwegian Computer Center, where Nygaard and Dahl had gone to work after leaving NDRE. Implementations of Simula I were made available to UNIVAC users and to Burroughs B5500 users.7 Nygaard and Dahl did a consulting deal with a Swedish company called ASEA that involved using Simula to run job shop simulations. But Nygaard and Dahl soon realized that Simula could be used to write programs that had nothing to do with simulation at all.
Stein Krogdahl, a professor at the University of Oslo who has written about the history of Simula, claims that "the spark that really made the development of a new general-purpose language take off" was a paper called "Record Handling" by the British computer scientist C.A.R. Hoare.8 If you read Hoare's paper now, this is easy to believe. I'm surprised that you don't hear Hoare's name more often when people talk about the history of object-oriented languages. Consider this excerpt from his paper:
The proposal envisages the existence inside the computer during the execution of the program, of an arbitrary number of records, each of which represents some object which is of past, present or future interest to the programmer. The program keeps dynamic control of the number of records in existence, and can create new records or destroy existing ones in accordance with the requirements of the task in hand.
Each record in the computer must belong to one of a limited number of disjoint record classes; the programmer may declare as many record classes as he requires, and he associates with each class an identifier to name it. A record class name may be thought of as a common generic term like "cow," "table," or "house" and the records which belong to these classes represent the individual cows, tables, and houses.
Hoare does not mention subclasses in this particular paper, but Dahl credits him with introducing Nygaard and himself to the concept.9 Nygaard and Dahl had noticed that processes in Simula I often had common elements. Using a superclass to implement those common elements would be convenient. This also raised the possibility that the "process" idea itself could be implemented as a superclass, meaning that not every class had to be a process with a single operating rule. This then was the second great generalization that would make Simula 67 a truly general-purpose programming language. It was such a shift of focus that Nygaard and Dahl briefly considered changing the name of the language so that people would know it was not just for simulations.10 But "Simula" was too much of an established name for them to risk it.
In 1967, Nygaard and Dahl signed a contract with Control Data to implement this new version of Simula, to be known as Simula 67. A conference was held in June, where people from Control Data, the University of Oslo, and the Norwegian Computing Center met with Nygaard and Dahl to establish a specification for this new language. This conference eventually led to a document called the "Simula 67 Common Base Language," which defined the language going forward.
Several different vendors would make Simula 67 compilers. The Association of Simula Users (ASU) was founded and began holding annual conferences. Simula 67 soon had users in more than 23 different countries.11
21st Century Simula

Simula is remembered now because of its influence on the languages that have supplanted it. You would be hard-pressed to find anyone still using Simula to write application programs. But that doesn't mean that Simula is an entirely dead language. You can still compile and run Simula programs on your computer today, thanks to GNU cim.
The cim compiler implements the Simula standard as it was after a revision in 1986. But this is mostly the Simula 67 version of the language. You can write classes, subclasses, and virtual methods just as you would have with Simula 67. So you could create a small object-oriented program that looks a lot like something you could easily write in Python or Ruby:
! dogs.sim ;
Begin
    Class Dog;
        ! The cim compiler requires virtual procedures to be fully specified ;
        Virtual: Procedure bark Is Procedure bark;;
    Begin
        Procedure bark;
        Begin
            OutText("Woof!");
            OutImage; ! Outputs a newline ;
        End;
    End;

    Dog Class Chihuahua; ! Chihuahua is "prefixed" by Dog ;
    Begin
        Procedure bark;
        Begin
            OutText("Yap yap yap yap yap yap");
            OutImage;
        End;
    End;

    Ref (Dog) d;
    d :- new Chihuahua; ! :- is the reference assignment operator ;
    d.bark;
End;
You would compile and run it as follows:
$ cim dogs.sim
Compiling dogs.sim:
gcc -g -O2 -c dogs.c
gcc -g -O2 -o dogs dogs.o -L/usr/local/lib -lcim
$ ./dogs
Yap yap yap yap yap yap
(You might notice that cim compiles Simula to C, then hands off to a C compiler.)
This was what object-oriented programming looked like in 1967, and I hope you agree that aside from syntactic differences this is also what object-oriented programming looks like in 2019. So you can see why Simula is considered a historically important language.
But I'm more interested in showing you the process model that was central to Simula I. That process model is still available in Simula 67, but only when you use the Process class and a special Simulation block.
In order to show you how processes work, I've decided to simulate the following scenario. Imagine that there is a village full of villagers next to a river. The river has lots of fish, but between them the villagers only have one fishing rod. The villagers, who have voracious appetites, get hungry every 60 minutes or so. When they get hungry, they have to use the fishing rod to catch a fish. If a villager cannot use the fishing rod because another villager is waiting for it, then the villager queues up to use the fishing rod. If a villager has to wait more than five minutes to catch a fish, then the villager loses health. If a villager loses too much health, then that villager has starved to death.
This is a somewhat strange example and I'm not sure why this is what first came to mind. But there you go. We will represent our villagers as Simula processes and see what happens over a day's worth of simulated time in a village with four villagers.
The full program is available here as a Gist.
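To give a sense of the shape of such a program without reproducing the whole thing, here is a minimal sketch of a process-based simulation in the same style. This is illustrative only -- it is not the Gist program, the villager logic is elided, and the names are just examples:

! villagers.sim - an illustrative sketch only ;
Simulation Begin
    Process Class Villager(name); Text name;
    Begin
        While True Do Begin
            Hold(60);    ! Get hungry roughly every hour ;
            OutText(name);
            OutText(" is hungry and requests the fishing rod.");
            OutImage;
            ! ... queue for the rod, catch a fish, or starve ... ;
        End;
    End;

    Ref (Villager) v;
    v :- new Villager("John");
    Activate v;    ! Hand the process over to the event notice system ;
    Hold(1440);    ! Let a full day of simulated time play out ;
End;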
The last lines of my output look like the following. Here we are seeing what happens in the last few hours of the day:
1299.45: John is hungry and requests the fishing rod.
1299.45: John is now fishing.
1311.39: John has caught a fish.
1328.96: Betty is hungry and requests the fishing rod.
1328.96: Betty is now fishing.
1331.25: Jane is hungry and requests the fishing rod.
1340.44: Betty has caught a fish.
1340.44: Jane went hungry waiting for the rod.
1340.44: Jane starved to death waiting for the rod.
1369.21: John is hungry and requests the fishing rod.
1369.21: John is now fishing.
1379.33: John has caught a fish.
1409.59: Betty is hungry and requests the fishing rod.
1409.59: Betty is now fishing.
1419.98: Betty has caught a fish.
1427.53: John is hungry and requests the fishing rod.
1427.53: John is now fishing.
1437.52: John has caught a fish.
Poor Jane starved to death. But she lasted longer than Sam, who didn't even make it to 7am. Betty and John sure have it good now that only two of them need the fishing rod.
What I want you to see here is that the main, top-level part of the program does nothing but create the four villager processes and get them going. The processes manipulate the fishing rod object in the same way that we would manipulate an object today. But the main part of the program does not call any methods or modify any properties on the processes. The processes have internal state, but this internal state only gets modified by the process itself.
There are still fields that get mutated in place here, so this style of programming does not directly address the problems that pure functional programming would solve. But as Krogdahl observes, "this mechanism invites the programmer of a simulation to model the underlying system as a set of processes, each describing some natural sequence of events in that system."12 Rather than thinking primarily in terms of nouns or actors—objects that do things to other objects—here we are thinking of ongoing processes. The benefit is that we can hand overall control of our program off to Simula's event notice system, which Krogdahl calls a "time manager." So even though we are still mutating processes in place, no process makes any assumptions about the state of another process. Each process interacts with other processes only indirectly.
It's not obvious how this pattern could be used to build, say, a compiler or an HTTP server. (On the other hand, if you've ever programmed games in the Unity game engine, this should look familiar.) I also admit that even though we have a "time manager" now, this may not have been exactly what Hickey meant when he said that we need an explicit notion of time in our programs. (I think he'd want something like the superscript notation that Ada Lovelace used to distinguish between the different values a variable assumes through time.) All the same, I think it's really interesting that right there at the beginning of object-oriented programming we can find a style of programming that is not at all like the object-oriented programming we are used to. We might take it for granted that object-oriented programming simply works one way—that a program is just a long list of the things that certain objects do to other objects in the exact order that they do them. Simula I's process system shows that there are other approaches. Functional languages are probably a better thought-out alternative, but Simula I reminds us that the very notion of alternatives to modern object-oriented programming should come as no surprise.
If you enjoyed this post, more posts like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.
Previously on TwoBitHistory…
Hey everyone! I sadly haven't had time to do any new writing but I've just put up an updated version of my history of RSS. This version incorporates interviews I've since done with some of the key people behind RSS like Ramanathan Guha and Dan Libby. https://t.co/WYPhvpTGqB
— TwoBitHistory (@TwoBitHistory) December 18, 2018
1. Jan Rune Holmevik, "The History of Simula," accessed January 31, 2019, http://campus.hesge.ch/daehne/2004-2005/langages/simula.htm.
2. Ole-Johan Dahl and Kristen Nygaard, "SIMULA—An ALGOL-Based Simulation Language," Communications of the ACM 9, no. 9 (September 1966): 671, accessed January 31, 2019, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.384&rep=rep1&type=pdf.
3. Stein Krogdahl, "The Birth of Simula," 2, accessed January 31, 2019, http://heim.ifi.uio.no/~steinkr/papers/HiNC1-webversion-simula.pdf.
4. Ibid.
5. Ole-Johan Dahl and Kristen Nygaard, "The Development of the Simula Languages," ACM SIGPLAN Notices 13, no. 8 (August 1978): 248, accessed January 31, 2019, https://hannemyr.com/cache/knojd_acm78.pdf.
6. Dahl and Nygaard (1966), 676.
7. Dahl and Nygaard (1978), 257.
8. Krogdahl, 3.
9. Ole-Johan Dahl, "The Birth of Object-Orientation: The Simula Languages," 3, accessed January 31, 2019, http://www.olejohandahl.info/old/birth-of-oo.pdf.
10. Dahl and Nygaard (1978), 265.
11. Holmevik.
12. Krogdahl, 4.

My favorite thrill is pinning the throttle after the apex of T3 at HPR, especially with an untested bike. My second favorite thrill is getting an idea out of my head and turning it into a design before making it real, loading it into the van, and taking it to the track. The next racer is now in that "second favorite" stage. The details are a lot different than the first racer's, with a few exceptions: the shocks will end up in the same area, and the upper A-arms and single-sided steered uprights will return at both ends (with vastly lighter and better-looking design and fabrication). I'm really excited about the design of the lower suspension arm - it is VERY exotic and unique (one arm for both wheels), and it solves all of my prior 2WS/2WD design nuisances and headaches.

Another good thrill is having a photo pass for MRA events and shooting video. While nothing beats the view of the race from a race bike, being able to wander around the track and scout out vantage points for capturing the action is about as close as anyone else can get. A lot of my footage made it into the MRA's Award Banquet video - thanks for the credit!

Getting the shop together and tooled up is still in progress. All of the lighting and outlets were removed and replaced. A heater should be installed very soon. Real machine tools have been moved in. As always, good help is necessary for rigging heavy machines!

There's a LOT of work in the shop to be done in 2019. And MRA racing action to shoot at HPR with a new camera this upcoming season as well.
T3 awaits...
Top video: Jeremy Alexander
Originally shared by Shava Nerad
David Sirota's war against media critical thinking
Bernie, Beto, and the Streisand Effect
If there's anything I despise more than an attack on the electorate from foreign influence, from right wing media, from corporate mainstream media, it's a left media figure using everything we know about propaganda and media criticism, distortion and influence, to punch left.
David Sirota was Bernie Sanders' first office lead in the 90s, when Vermont sent Bernie to DC to mess with Speaker Newt Gingrich's head. It was David's first big break in DC from the looks of his CV, and I'm sure that the relationship means a lot.
Right up to New Hampshire, I was pretty gleeful about Sanders' run. I was really troubled when he hired Tad Devine and displaced his Vermont staff. I defended Bernie with teeth bared when his staff lifted the Clinton campaign's annotated voter file (yes, in the modern way of blurring social engineering and hacking, you can call that hacking) and the DNC threatened -- quite justifiably -- to shut their asses down.
Later I ended up regretting it as the campaign grew more and more anti-community. I imagined another Dean campaign -- bottom up, participatory, integrated with the party to the point of taking over the counties, breathing a via positiva of lifeblood into the progressives -- to use an abused term? Hope.
What we got was Bernie Bros who presaged #metoo politics, and a level of hostility and lack of civic and political understanding of how political insurgencies work in a two-party system that was crippling, all around.
Well, oops, it's happening again.
This time, instead of Tad, we've got David Sirota as our snake in the garden, the designated whisperer of insinuations to drip poison into ears and divide.
He's fun. Let me take this apart for you. I'm going to write this up as a reference for fellow journalists. It's going to be tl;dr, long, opinionated but well documented, and I'm going to add to it over the course of days.
===
Who the hell is Beto O'Rourke?
I'd heard the name. There's even some lunatic with Mass plates on my street here in Cambridge who has a Beto bumper sticker. Early adopter, I guess.
But as I've written here I'm not favoring anyone at this point in 2020. It's too early.
Still, the first week in December, I saw retweets of Sirota "exposing" Beto for various insinuated sins against progressive politics. The major charges have been that he has:
o - voted "with the GOP" 167 times.
o - accepted at least one maxed out donation from a CEO of an oil/gas corporation
Now, I'm going to go through and take these apart in depth with full footnotes, but this preamble is just to explain why this rang such an off note with me.
Voting "with the GOP" means you are not voting party line Democrat. There are lots of reasons for this. One of the most common in recent years is that you live in a rural state. Yo? This is part of how we got the Cheeto.
Plus, Politico has reported that Sanders votes with the Dems about 95% of the time. He has been in office a very long time. I don't have a full tally, but David has included in his 167 procedural votes where Beto joined the evil pachyderms. How many hundreds or thousands more votes has our independent from Vermont registered since the 90s?
David illustrated Beto's receiving "oil money" with the CEO of a small business in Texas. Right SIC code, $2700 donation. He instructed people to decide what they thought of it -- after framing it with the idea that we can't afford more money in politics supporting the global warming that is going to kill us all. Nice.
Remember, this stuff pretty much starts with Stalin, and he was a lefty. We've all studied him. ;)
The example he uses is a guy who is a long time Democratic donor, the widower of a Human Rights Campaign activist. The two men were married in the Unitarian Universalist church. Now he's raising two kids as a single dad.
I honestly doubt he was buying Beto's vote for big oil.
Beto's a Texas politician. Over 375,000 people in Texas fall directly under the oil/gas SIC code, and more -- likely millions -- in the many industries that support and profit from the extraction and refining.
What is Bernie afraid of?
I'm not the only one -- probably not even the only one who didn't know crap about Beto -- for whom David Sirota's tactics are bringing a spotlight to the Texan and shade to the Vermonter.
Streisand Effect.
https://nymclub.net/photos/mosqueeto/album/50d4bb8cdc76f81c887b296fdbabc64d6a203552bc8122e7805e6326fb5e7ae0
https://www.theonion.com/thousands-of-drunk-revelers-dressed-as-jesus-descend-on-1831048100

Best viewed large.
This is the scariest article I've read in a really long, scary, time.
"(It's easy to read that number as 60 percent less, but it's sixtyfold less: Where once he caught 473 milligrams of bugs, Lister was now catching just eight milligrams.) "It was, you know, devastating," Lister told me. But even scarier were the ways the losses were already moving through the ecosystem, with serious declines in the numbers of lizards, birds and frogs. The paper reported "a bottom-up trophic cascade and consequent collapse of the forest food web." Lister's inbox quickly filled with messages from other scientists, especially people who study soil invertebrates, telling him they were seeing similarly frightening declines. Even after his dire findings, Lister found the losses shocking: "I didn't even know about the earthworm crisis!"
https://www.nytimes.com/2018/11/27/magazine/insect-apocalypse.html

A bug.

Fuck Bernie Sanders, his momma, and the white horse he rode his racist ass in on. Those white voters don't exist.
https://www.thedailybeast.com/bernie-sanders-on-andrew-gillium-and-stacey-abrams-many-whites-uncomfortable-voting-for-black-candidates

Originally shared by Darrin C
There's currently an organized bot effort to discourage people from voting. Among their "arguments" is that voting machines are all "rigged".
Hogwash.
I'm one of the computer scientists who discovered these voting system problems. And I vote, because it matters. So should you.

Purgatory.
How I set up a hubzilla hub on Digital Ocean.
Motivation: Hubzilla is the most interesting of the possible G+ alternatives I've looked at so far. Its most important feature, from my perspective, is that your identity isn't tied to a particular hub. Identity portability is built in. You can move your activity to any hubzilla hub -- one you run yourself, or one run by a mega-corp. You can run your own hub and participate fully in the network.
But hubzilla is new and perhaps a bit hard to grasp -- new terms, new concepts. I decided to set up a hub.
Requirements: Some proficiency in Linux system administration at the command line. Financial commitment of ~ $100/year. Time commitment of ~ 8-24 hours to set up, and then ongoing time TBD. Some expertise in using Google...
0) You need a domain name for your hub (eg: "nymclub.net"). In my case I had registered the name long ago at godaddy.com.
1) Set up a DO account (at https://www.digitalocean.com/) and create a
droplet. A minimal droplet costs $5/month. Select Ubuntu 18.04 as the OS.
The name of the droplet on creation should be the domain name above (eg: "nymclub.net"), and not the name they provide by default. Secure your droplet by following the excellent clear instructions at
https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04
[In general, DO has really great tutorials.]
[Note: I don't work for DO, but I am a satisfied customer.]
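If I recall, the gist of that tutorial is creating a non-root user and enabling the firewall, run as root on the fresh droplet ("sammy" is a placeholder user name):

$ adduser sammy
$ usermod -aG sudo sammy
$ ufw allow OpenSSH
$ ufw enable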
The droplet will have an IP address that it will keep as long as it is alive.
Note it.
2) Set up DNS using the above IP address. You can use DO servers, following the documentation at:
https://www.digitalocean.com/docs/networking/dns/
In my case I have my own dns servers, so I used them.
Be sure you can log into your droplet remotely, by name, not IP address.
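A quick way to verify both the DNS record and remote access (dig is in the dnsutils package if it's missing; the IP here is just an example):

$ dig +short nymclub.net
203.0.113.10
$ ssh sammy@nymclub.net

The dig output should match the droplet IP you noted above.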
3) Install the LAMP stack (Linux, Apache, Mysql, PHP):
https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-ubuntu-18-04
(There is also a DO "one click application" that gives you a server with a
LAMP stack. I don't know anything about it, other than it exists.)
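(For reference, the package installs in that tutorial boil down to something like

$ sudo apt install apache2 mysql-server php libapache2-mod-php php-mysql

plus firewall and configuration steps -- but follow the tutorial, since the details matter.)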
Note: I used MariaDB, a free drop-in replacement for MySQL, with
$ sudo apt install mariadb-server
if I recall correctly. It may be the default already.
Verify that you can see the apache start page from your browser.
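You can also check from the droplet itself (you may need to install curl first):

$ curl -I http://localhost

An "HTTP/1.1 200 OK" response with an Apache Server header means the web server is up.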
4) Hubzilla requires a mail server that can send mail to confirm accounts. I just installed postfix as an "internet server":
$ sudo apt update
$ sudo apt install postfix
[I think this will take further work on my part -- I got it to the point where it could send the confirmation emails, and didn't do any more email
configuration. Email is, in general, a pain.]
I also installed an email client so I could send test messages:
$ sudo apt install mutt
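A one-liner to test that mail actually goes out (the address is a placeholder):

$ echo "test from the hub" | mutt -s "postfix test" you@example.com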
5) Hubzilla also highly recommends using SSL/TLS for your web server. I used the "Let's Encrypt" certificate authority, and "certbot". See
https://certbot.eff.org/lets-encrypt/ubuntubionic-apache for information.
https://nymclub.net worked first try.
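For reference, once certbot and its apache plugin are installed per those instructions, requesting and installing the certificate is essentially one command (using the example domain from above):

$ sudo certbot --apache -d nymclub.net

It rewrites the apache configuration for you and sets up automatic renewal.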
6) Install hubzilla. The instructions at
https://project.hubzilla.org/help/en/admin/administrator_guide are perhaps too concise, but they are complete. I followed them slavishly. Google will reveal several other tutorials:
https://www.howtoforge.com/tutorial/how-to-install-hubzilla-on-ubuntu/ --
doesn't include TLS, and uses an apache virtual host.
https://hubzilla.rocks/page/tobias/tutorial_install_hubzilla_in_7_easy_steps
https://websiteforstudents.com/install-hubzilla-platform-on-ubuntu-16-04-18-04-with-apache2-mariadb-and-php-7-2/
Might not be a bad idea to read through them.
Once you have unpacked the software you can access the site and use the software itself to guide you through the installation. Specifically, I used the recommended "git clone" to get hubzilla in the default root directory of the web site, /var/www/html, then browsed to https://nymclub.net. The web site at this point shows the status of the installation -- what is missing and what needs to be configured. At the command line, then, I manually installed the needed requirements -- php-zip, mbstring, php-xml, and several others were needed. Sometimes it took a bit of head-scratching to figure out the correct package name to install. You can use "dpkg" to search for what's already installed:
$ sudo dpkg -S php-zip
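For reference, the "git clone" step mentioned above has roughly this shape (take the actual repository URL from the administrator guide, since it has moved over time):

$ cd /var/www/html
$ sudo git clone [repository URL from the admin guide] .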
You may also need to edit the /etc/php/7.2/apache2/php.ini file to be sure that all the indicated packages have been enabled, change upload limits, and so on.
Important: in order to get changes in the php configuration to be reflected in the web page you must first RESTART THE WEB SERVER:
$ sudo service apache2 restart
Took me a while to remember that...
Be sure you have changed "AllowOverride None" to "AllowOverride All" in the necessary places in the /etc/apache2/apache2.conf file:

<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>
Set up the database and the database user as described in the documentation.
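From the MariaDB prompt, that usually looks something like the following (database name, user, and password here are placeholders -- use the values you will give the hubzilla installer):

$ sudo mysql
> CREATE DATABASE hubzilla;
> CREATE USER 'hubzilla'@'localhost' IDENTIFIED BY 'a-strong-password';
> GRANT ALL PRIVILEGES ON hubzilla.* TO 'hubzilla'@'localhost';
> FLUSH PRIVILEGES;
> EXIT;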
After all this you should have an installation.
Finally, you need to create the first user, using the same email address
that you provided during installation. This user is the administrator.
7) Now it is "administration" of hubzilla, not "installation." I'm learning
this at the moment, so just a couple of quick notes.
Initially, the navigation bar at the top of the page is pretty empty. In the
upper right of the "Channel Home" page is "New Member Links". A "Missing Features?" group is at the bottom of the list, with "Install more apps" and "Pin apps...".
"Install more apps": click on this link to get a list of apps to install.
"Pin apps to navigation bar": Click on that link, and you will see a bunch
of apps, along with a pushpin icon. Click the icon, and the app will appear in the nav bar. (Requires a reload).
I think you need apps to do much of anything, so the above two steps are pretty important.
Anyway, for ~$100/year you can have an awesome social network with complete control over your own data. Not a bad deal.
Installation is a bit complex -- hubzilla itself is really pretty straightforward to install, but the infrastructure (a virtual host, a domain name with functioning DNS, a web server with a LAMP stack, a Let's Encrypt certificate, email) is a bunch of fiddly details that need to be set up. I haven't done this stuff for a while, and there were small snags that I wouldn't have hit in my younger days...
Of course, people who live on their phones may not have that luxury, but I surf mainly from my desktop.

Murder in the garden....
Originally shared by Sawanya Prittipongpunt
Big-eyed bugs - Hemiptera (Geocoris) : มวนตาโต (Thai for "big-eyed bug")
ello.co: mosqueeto
pluspora.com: mosqueeto
diasp.org: mosqueeto
cake.co: mosqueeto
mastodon.social: mosqueeto
photog.social: mosqueeto
reddit.com: mosqueeto
tumblr.com: mosqueeto
social.isurf.ca: mosqueeto
C32767.blogger.com
https://medium.com/we-distribute/a-quick-guide-to-the-free-network-c069309f334
What kind of an organizational infrastructure is appropriate to support a long term replacement for gplus? For example, I could personally support a diaspora instance with perhaps a few dozen active users, using my personal funds and time. But that doesn't scale. Many of the efforts so far seem to be on an "if we build it they will come and the organizational infrastructure will evolve later" model.
Some kind of legal entity (corporation/foundation/trust) with an unlimited potential lifetime and a funding mechanism seems required. A governance mechanism to deal with bad actors, hopefully something other than autocracy, would be nice.
In a federated case this consideration applies to each instance individually, and to the overall software base. How do the various alternatives compare in this matter?
https://www.dailydot.com/debug/facebook-alternatives/
https://www.theguardian.com/technology/2018/mar/31/youve-decided-to-delete-facebook-but-what-will-you-replace-it-with
https://fossbytes.com/best-facebook-alternatives/
https://eluxemagazine.com/magazine/alternative-social-media-sites/
Google search on "alternative social media":
https://www.cnet.com/how-to/social-media-alternatives-to-facebook/
https://www.1and1.com/digitalguide/online-marketing/social-media/the-best-facebook-alternatives/
Diaspora, ello, and tumblr seem somewhat viable.
Hello…is this thing on? It’s been quite a while, but some of you may still be subscribed to the old RSS feed (is that still a thing?). If you do somehow find this post, I want to let you know that I just started posting again on the @bicycledesign Instagram account. I plan to update Instagram primarily going forward, but the Twitter and Facebook accounts will remain active too (mainly just auto-posting from Instagram).
Who knows… I may even start occasionally posting on the blog again too. I am sure that it would take quite a bit of time to rebuild the community after three years away, but I really do miss it, so the effort might be worthwhile. I don't want to get ahead of myself though. For now, I'll just see how it goes resurfacing on social media. I hope that you will follow along… and maybe even contribute some of your ideas and designs.
When rock'n'roll began it certainly didn't seem like album music either, it was dance music driven by singular hits, and it needed to come up with a way to make albums work as albums, a way to piece them together as wholes rather than just containing a few hits and a lot of fillers. And eventually this got worked out, the "fillers" evolved from fast attempts to reinvoke the hit formula over and over, to an open space for trying out a variety of different types of stuff, without necessarily trying to make hits. Not that it always worked, but there never really seemed to be a constant question of "how to translate rock music into album music" - the solution just sort of developed by itself and now appears to be an intuitive understanding. If you're setting out to make a rock album, a bunch of songs of different kinds, and it doesn't work, it's not because you haven't squared the album circle, it's simply because your material isn't good enough. With rave music - well, with the whole post acid electronic scene, really - that never got to be the case. It wasn't just a matter of making good stuff, you also had to make a proper album context for it.
So why didn't rave music find a way to turn into album music, if it wasn't simply because it was too ecstatic, too lost-in-the-here-and-now-of-the-dancefloor-experience, to make sense outside of that context? I think there are several reasons: First of all, and unlike rock, "electronic dance music" never really lost its identity as "dance" music - which is why I'm using that rather clunky name for it, rather than simply calling it "rave" or "techno", both of which have much more specific meanings for a lot of people. This is something that the older generation of rock critics embraced wholeheartedly, because it meant that they could dismiss it as mere faddish functionality - not something you'd really need to pay attention to, let alone try to understand on a deeper level, because it didn't have a deeper level, it was just about stupid fun on the dancefloor and nothing more. And sadly, a lot of rave insiders fully accepted this traditional, rock-derived idea of musical substance (authenticity, narrative), and took it as a badge of honour that rave didn't contain any of that. Rather than figuring out how to understand and describe the music on its own terms, the easy option was to stay in the reservation and just use it as a defence: Yeah, you old farts don't understand it because you have to experience it in the context of the dancefloor and the lights and bass and the drugs, to get it - it can't work on its own, so of course it doesn't sound good at home. As if there weren't plenty of people who did listen to it at home, or got it fine without the drugs'n'dancefloor context (I'm one of them, and I've got several friends for whom it worked just as well).

That one of the most common umbrella terms for everything from deep house to eurodance to hardcore techno is simply "dance music" is exactly the problem - suggesting not just that it's music that can be used for dancing (unlike rock or jazz or hip hop?), but rather that it's the only use, that listening to it is pointless. This leads directly to the second reason that rave never fully managed to turn into album music, namely the split between electronic "dance music" and electronic "listening music". The "electronic listening music" was seen as the obvious way to approach the album market, because albums are, after all, something you "listen to". As a result, far too much of this stuff deliberately avoided everything that had made rave music so great - the brutally inhuman machine structures, the raw synthetic sounds, the harsh viscerality - in favour of a polished and pleasing mood music that too often came off as strangely regressive compared to the dancefloor stuff. I remember that the whole "ambient" movement seemed inexplicable and disappointing to me at the time - not because I didn't like ambient, but exactly because I already knew it well: Ambient was an old style by the nineties, and it appeared to me such a ridiculously regressive move to go back to that just when something as radically new and exciting as rave and techno was happening. I couldn't understand why anyone from the electronic scene would rather look back just when electronic music was at its most revolutionary peak ever, let alone how they could claim that something as safe and well-established as ambient was a new front line.
Not that the ambient/"electronic listening music" camp didn't produce some really good stuff, they certainly delivered their share of the brilliant and incredibly inventive music that made the first half of the nineties such a golden age, but there was also a huge amount of it that sounded as tame and regressive as you'd expect from a movement that used a return to decades old contemplative mood music as the "mature" and "sophisticated" response to the not-just-music of the kids. Most of all, though, both parts suffered from this divide, as it meant that almost everything was forced into one of the two options, and the synthesis needed to make it work as versatile album music never really materialized - the "listening" albums all too often ended up as far too long "mind journeys", lacking the straightforward buzz and urgency of rave, while the pure rave albums either became collections of hits+fillers, trying to repeat a successful formula over a whole album, or tried to create variation through awful "rave ballads" or annoying guest vocalists.
The need for variation was worsened by the third obstacle for rave as album music: That rave just happened to break through at a time when everybody thought that the CD had won over vinyl, especially as the album format of the future. Even though rave and techno probably were the most resilient vinyl strongholds of the nineties, they still subscribed to the logic of vinyl = singles and CDs = albums. The consequence was that far too many electronic albums from this time were simply too long for their own good - the 74 available minutes of the CD became a standard that you were expected to fill out, whether you had the material or inspiration for it - there were even buyers who felt "cheated" if they didn't get their "money's worth" (apparently it was of less importance if the music was any good, as long as they got more of it). As a result of the 74 minutes as the default album length, just making a collection of straightforward rave tracks became a much more problematic option.
The pioneering "album rock" of the sixties had the great advantage that no one expected it to exceed forty minutes (and even thirty minutes or less was perfectly acceptable and far from unusual), which meant that even relatively single-minded rock or pop albums rarely dragged on for too long and became monotonous. It's hard to imagine that most classic rock albums would have gained much by being twice as long - even if the hypothetical extra material was as good as the original stuff, it seems likely that the end result would eventually become to samey. The sharp and precise format of the classic rock LP simply meant that even more-or-less one trick ponies could get away with presenting a handful of slightly different variations of the one trick. In the CD age that approach became increasingly problematic, and releasing an album started to be seen more and more as some sort of "project", something that had to show ambition, versatility and the ability to envision - as well as fill - a vast canvas. Which is all in all not the most obvious way to go when doing a "rave" album.

One solution to this problem was focusing on EPs for straightforward rave/techno/hardcore releases. Rather than having to live up to the expectations of "the album", you could simply do 4-6 tracks that were good enough to stand on their own, yet with sufficient variation and ideas among them for the record to work as a whole. Effectively hijacking 12" singles and turning them into the rave scene's equivalent of the "classic", "handful of songs" rock album, the early nineties was a golden age for the EP format. As for actual albums, though - rave albums as well as those with a deliberate "listening-oriented" ambient/IDM approach (which also had their troubles with the 74 minutes) - the problem was never really solved. Or rather, the best solutions seemed more to be down to pure "luck" - that is, simply having enough good and sufficiently varied material to pull off not doing anything but more of the same (i.e. the same "luck" that made many straightforward rock albums work, except that much more luck was needed in the CD age) - or some unique visionary twist that would only work with the specific style and approach of the artist coming up with it (this was of course more often the case with the IDM-leaning stuff - in the rave department, very few even remotely succeeded with this trick).
So, no one managed to invent a universally workable way to build albums out of rave music, but there were a lot of attempts, and a lot of them were actually really good - even if the majority were basically failures. Perhaps this is why I find them fascinating: The possibility of finding a good one isn't big, but it's all the more rewarding, and unexpected, when it happens. The overview below is almost certainly far from complete, I'm sure there are some obscurities, and perhaps even some obvious ones I'm unaware of, but I've tried to include as much as possible. I'm only looking at stuff that I consider "rave", which basically means music that is not trying to be deliberately "underground" and "clubby", i.e. no minimal techno, proto-IDM, deep or progressive house etc. - as well as stuff like straightforward bleep or acid. What I'm after is artists that either got to make albums because they already had hits, or made music that showed that they clearly tried to make hits, all while still staying within actual rave and techno (unlike crossover eurodance/pop like 2 Unlimited, Snap, Technotronic and their ilk). Which basically means that it's mostly within the breakbeat 'ardcore/Belgian techno/proto-trance spectrum, even though there's obviously some grey areas here and there. To avoid those as much as possible, I've also restricted myself to the years 1990-1992, because after that point, most artists had either chosen a specific subgenre path (jungle, trance, gabber), or they'd abandoned rave altogether (there were some exceptions, of course; I'll look into some of the more interesting ones at the end of this list).
THE BRITISH SCENE: I've grouped the albums according to the regional scenes, starting with the British, which perhaps had the largest amount and broadest variety of rave albums. The breakbeat sound was almost uniquely British (a few continental producers used it occasionally, but basically as a deliberately added "British flavour"), and in retrospect, we think of it as being what early British rave music was, laying the foundation for jungle. But as we'll see, there was a lot more going on.

The Prodigy: Experience (1992)
Pretty much the gold standard of rave albums, this is exactly how it should be done - presuming you've got the endless supply of explosive energy and inventive ideas that Liam Howlett had at this point. Despite being pretty much all hyper-intense breakbeat 'ardcore from start to finish, Experience simply has so much going on, such an abundance of rhythmic-melodic twists and turns, that it never seems even remotely samey or single-minded. It's only a slight exaggeration to say that every moment of Experience is a highlight; at the very least there's never any part of it that comes off as uninspired filler material, and it's especially amazing how the tracks constantly morph and change direction - no part of them ever gets a chance to grow dull or predictable before they're suddenly taken over by a new, insanely catchy hook leaping out of the speakers, taking the intensity and excitement to a new level. You'd think that this could become too much, but the greatest achievement of the album is perhaps that the sheer originality and freshness of the riffs and the way the tracks are structured enables it to keep up that insane level of hyperactivity without ever getting exhausting or monotonous. There's the one more "experimental" track, "Weather Experience", which is perhaps not quite as memorable as the rest, but it still has enough jittery drive to not feel out of place. And the rest - well, whether it's stone cold classics like "Hyperspeed", "Charly", "Out of Space" and "Everybody in the Place", or tracks made especially for the album like "Jericho", "Wind it Up" or "Ruff in the Jungle Bizness", each of them is pretty much a mini breakbeat masterpiece in its own right, all while forming a whole that is even greater than the sum of its parts. It's hard to imagine it done better than this!

Altern8: Full On - Mask Hysteria (1992)
Much in the same vein as Experience, albeit still with some vestiges of Archer and Peat's background in acid house/Detroit techno/bleep (of which there were no traces in Howlett's sound), and generally considered the lesser album. Which it is. It's not so much that it's closer to the hits-and-fillers formula, it's rather that neither the hits (quite numerous, actually) nor the fillers are anywhere near as good as The Prodigy's hits and fillers. Most of the tracks are still pretty good, but they're also more or less standard run-of-the-mill breakbeat 'ardcore. Lots of catchy riffs and nice samples, and thankfully it never goes into crossover dance territory, so all in all not bad at all, it just doesn't really blow your mind, and in the end the genericness does get a bit longwinded in just the way that Experience so impressively manages to avoid. It definitely would have benefited from being a couple of tracks shorter. That said, it's perhaps a bit unfair to criticise it for not being of the same calibre as the definitive masterpiece of the form, and had Experience not existed, Full On - Mask Hysteria might be remembered as the purest album distillation of breakbeat rave: Far from perfect, but with plenty of the lumpen throw-away quality and cheap synthetic excitement that is all part of the charm.

Urban Hype: Conspiracy to Dance (1992)
While Full On is better than its reputation, this is pretty much the platonic ideal of getting it wrong when it comes to British breakbeat rave. Known mostly for the novelty "toytown" hit "A Trip to Trumpton", you'd think Urban Hype would, at least partially, go for a playful and "tasteless" approach a la The Prodigy, but instead most of the tracks seem to aim for a slightly more "deep" vibe, or the most anonymously bland rave-pop-by-numbers combination of italo piano and soul divas. On tracks like "Relapsed" and "The Dream", the rave elements are still sufficiently raw and ecstatic to drown out the worst syrupy samples and keep them at least pretty exciting in the moment, but with "The Feeling", "Embolism" and "Living in a Fantasy", it's unfortunately the other way round, and they leave no other imprint on the memory than a slight nausea. And while the "deeper" tracks actually have some potential - especially "Teknologi part 2" and "Emotion" strike a good balance between groove, melody and atmospherics - for some reason the lame divas and pianos are also tacked on here, as if Urban Hype didn't actually believe that they would work as more moody pieces after all, eventually ruining them in the process.
Shades of Rhythm: Shades (1991)
Shades of Rhythm were unusually prolific on the album front, although a lot of the same tracks appeared on 1989's Frequency, 1991's Shades and 1992's The Album. Where the last one was pretty much just a slight update of Shades, Frequency is in a rougher, house/acid-derived bleep'n'breaks style, and not quite rave yet - despite the presence of the hits "Homicide" and "The Exorcist". In any case, I guess Shades is the "classic" Shades of Rhythm album, but unfortunately it gets it wrong in much the same way as the Urban Hype album. There are some good tracks on it - in addition to the aforementioned hits there's also heavy bleep'n'breaks workouts like "The Scientist" and "666 - the no. of the bass" - but then there's also the archetypical potentially-great-but-ruined-by-soul-diva-sugar-coating of "Sweet Sensation", as well as far, far too much horrible soul-jazzy deep house-ish dreck like "Shakers", "The Sound of Eden" and "Lonely Days, Lonely Nights", making it a bit of an endurance test listening to the album as a whole.
The Hypnotist: Let Us Pray - The Complete Hypnotist 91-92 (1992)
Refreshingly devoid of crossover dance and soulful "deepness", this is pretty much a collection of straightforward rave singles with a few unreleased tracks added, drawing on both the continental brutalist sound as well as British breakbeats. While this is obviously a good thing, Let Us Pray unfortunately doesn't work quite as well as it could have, because even though none of the tracks are bad, a lot of them are also sort of generic and samey, and as a result it isn't able to stay exciting for the 80-minute playing time. Too many tracks follow the same minimally surprising structure, and lack the wellspring of original ideas and hooks that made Liam Howlett able to pull off an hour of the same hectic sound without ever getting boring. That said, and despite missing "The Ride" - arguably his greatest track ever - there's plenty of classics here ("House is Mine", "Hardcore you know the score" and "God of the Universe" to name a few), and as such it works pretty great as a collection, an archival overview of Caspar Pound's contribution to rave music. It's just not something that it makes much sense to listen to as a whole. Had it been trimmed down to half the length, it could have been really great, but as it is, the sharp blast of the Hardcore EP is a much better suggestion for the definitive Hypnotist record.
Rhythmatic: Energy on Vinyl (1992)
A wonderfully compact and straightforward little album - basically an EP extended to an eight-track mini LP - and more or less getting it right where Urban Hype and Shades of Rhythm got it wrong: There's plenty of stylistic variety, stretching from full-on rave mania to more atmospheric sparseness, but it never degenerates into mainstream dance or tasteful, tedious "deepness", and it's never just doing styles-by-numbers. This might actually be the best thing about it - there's elements of breakbeat 'ardcore, bleep'n'bass, Belgian techno and even some house/Detroit vestiges here and there (it is on Network), but it's all mixed up into unique, constantly morphing concoctions, doing all sorts of weird and unexpected tricks and twists and never really being one single thing - except that it's all, in one way or another, rave music, pulsing with synthetic energy and jittery intensity. Sure, there's a few elements I could do without (the rapper on "Nu-Groove", the few examples of soul divas, i.e. the usual suspects), but exactly because the tracks change and evolve all the time, those elements never get stuck long enough to become annoying. Energy on Vinyl is one of the greatest overlooked gems of the early British rave scene.
N-Joi: Live in Manchester (1992)
Despite being some of the most successful hitmakers on the British rave scene, N-Joi didn't release a proper album until 1995, the rather tame and polished, progressive-housey Inside Out, long after the heyday of their original signature sound. They did make this brilliant "live" mini-LP, though, a just-short-of-30-minutes non-stop barrage of breaks, hooks and buzzing riffs that seems like a much better shot at finding an effective rave album formula than most proper rave albums. Reputedly an after-the-fact studio reconstruction, Live in Manchester creates a cheap laboratory facsimile of the "real" thing, a buffet of one-dimensional, yet thrillingly synthetic, empty rave calories.

Eon: Void Dweller (1992)
Every time I start listening to this, I immediately think it's going to be a brilliant album, which is hardly surprising, given that it takes off with three of Eon's catchiest tracks - "Basket Case", "Inner Mind" and "Fear" - all offering an abundance of exciting riffs and samples, as well as a highly original sound somewhere between the dominating rave of the day and a kind of proto big beat, as you might expect from something that involves J. Saul Kane. So why doesn't the album stay brilliant? Well, it's not just that the majority of the rest of the tracks are a bit more laid back and atmospheric (with a few exceptions, i.e. "Spice"), it's more that this means that they're less intent on being exciting, and the resulting slightly lower quota of wild sounds and samples draws your attention to the fact that the compositions in themselves haven't that much to offer - there aren't any really memorable hooks or melodies, or exciting structural ideas. Not that there has to be, of course; it's still an original and enjoyable album in many ways, it just doesn't grab the attention all the way through as it should, and would have benefited greatly from being two or three tracks shorter.
Shut Up and Dance: Dance Before the Police Comes (1991)
Ragga Twins: Reggae Owes Me Money (1991)
Rum & Black: Without Ice (1991)
I've talked about Ragga Twins and SUAD before, and they only tangentially belong here, given that rave elements only appear as parts of a broader, crossover hybrid sound with an emphasis on vocals. There's some quite ravey tracks on them for sure, but there's even more where the rap is the main thing, and though enjoyable (well, mostly SUAD; I've never been completely convinced by the twins, I must admit), neither Dance Before the Police Comes nor Reggae Owes Me Money really works as a rave album. In comparison, Without Ice is a much "purer" album, in that it's pretty much proto breakbeat darkcore from start to finish. Not that it makes it one-dimensional; there's both energetic rave, more sparse and gloomy atmospherics, as well as a lot of tracks that combine a bit of both. As such, I suppose it could have been great, but unfortunately, there's really not a lot of ideas in each of the many, many tracks - there's some good and interesting sample choices (Sakamoto, lots of Art of Noise), and the breaks are often treated so that they have a great, synthetic edge to them, but in the end, all tracks more or less just consist of a couple of simple, interchanging loops. It can sound great in smaller doses, but with a 50-minute playing time it eventually becomes a bit of a drag, where it's difficult to tell the tracks apart.
THE GERMAN SCENE: Despite being the only rave scene with a size and variety comparable to the British, early German rave has always been a bit overshadowed by its Belgian contemporaries, which had a few more hits of a broader, international impact. They do have a lot in common - in particular the EBM and new beat-elements, but in Germany that became even more pronounced and electroid, reflecting the importance of the early Frankfurt scene and its roots in new wave and industrial. Still, there's also many other elements present.
Time to Time: Im Wald der Träume (1991)
Equal parts rave euphoria, EBM/synth-wave coldness and infantile German humour, Im Wald der Träume is still as charming and paradoxical as the first time I wrote about it. As much an absurdist deconstruction as perhaps the purest distillation of rave silliness around, it's a unique and wonderfully bizarre artefact from a time when it was still pretty open what rave could and should be - there isn't really anything else like it.

Twin EQ: The Megablast (1991)
Much like Rhythmatic's Energy on Vinyl, and as mentioned elsewhere, this is a short and intense LP that very much sounds like it was slapped together in a hurry (as I'm sure it was; the Lissat/Zenker duo was hyper-productive back then), and it contains no recognizable classics, yet it's all the more charming for it. In addition to plenty of buzzing riffs and stomping beats, there's also 8-bit elements ("Hardcore Keyboard") and even weird machinic acid ("Enjoy"), but in the end it all has a sort of clunky functionality that suggests that this was made by people who were fully aware that they were churning out soulless rave fodder, and just decided to have fun with it, revelling in the cheap, inauthentic-synthetic aesthetic.
Interactive: Intercollection (1992)
Interestingly, and despite obviously trying to build on their previous hits, the debut album from Lissat and Zenker's most successful and well-known project was not nearly as convincing as the practically forgotten Twin EQ LP. In addition to "Who is Elvis", the first and most likely biggest in what would eventually become a series of gimmicky novelty hits, many of the tracks here were minor hits on the early German rave scene, and exhibit the typical combination of EBM coldness and Belgian brutalism (with new beat as the mediating factor). Arguably, the more minimal and restrained approach of this sound didn't work in Interactive's favour; they didn't let loose with as many deliciously synthetic sounds and raw ideas as on The Megablast. That is not to say that there aren't some pretty great tracks on The Intercollection - "The Techno Wave", "No Control" and "Dance Motherfucker" are all brilliant examples of the style - they just do lose some immediate freshness as they go on. The album's biggest problem, though, and the reason it isn't nearly as good as it could have been, is the fillers, which don't really offer much - again unlike The Megablast, which pretty much was nothing but exciting fillers.

Westbam: A Practising Maniac at Work (1992)
Westbam is an interesting character in German rave history; one of its most successful and enduring DJs, figurehead and main brain behind the massive, epoch-defining Mayday mega-raves, yet at the same time having a somewhat unusual background, more resembling the British DJ tradition, raised on hip hop and proto-house, than the typical German EBM/new beat route. Already a bit of a veteran in 1992, his previous two albums were more in a clear hip house vein, and it would seem obvious that he would go into breakbeats when going full-on rave with A Practising Maniac at Work, but instead, at least to some degree, he approached the more straightforwardly bruising, linear sound dominating the continental dancefloors at the time. Not that we're talking full-on Belgian brutalism - there's still hip house vestiges and plenty of uplifting breaks and samples - rather you get a kind of fusion between these elements and the harder, more cyber-cubist stuff. This actually makes it a somewhat original album, though not a very consistent one - some of the more housey and/or eclectically experimental tracks, like "Acid Snail Invasion" and "Street Corner", just go nowhere. Westbam is definitely best when he's clearly trying to make fast-paced dancefloor functionality or straight-up anthems, and he is able to make invigorating and somewhat cheesy rave fodder if he wants to, but here there's a bit too little of that to make a really powerful album. It's still enjoyable - the boring stuff is not too dominant and there's thankfully no cringeworthy dance-crossover attempts - but as a pure rave album, it's too uneven.
U96: Das Boot (1992)
I love the humour of the cover hyperbole: "The TV-advertised mega-seller album including at least 10 top-ten hits". Das Boot contains exactly ten tracks, with several clearly being fillers, included only to stretch the playing time to that of at least a short album. But clocking in at slightly over 40 minutes is actually a benefit here - U96 doesn't have that many ideas, so the fast pace of the whole thing means that the weaker elements (mostly) don't become annoyances. As a result, this is definitely one of the best cashing-in-on-a-one-hit-wonder albums of the entire rave era. In addition to some actually convincing examples of slow and dreamy "atmospheric fillers", including an odd yet oddly charming "Moments in Love" pastiche, Das Boot is basically abrasive Belgian brutalism tinged with a few whiffs of the colder, EBM-derived German sound, and - except for the misnamed "Ambient Underworld", which could have been good but is completely ruined by lame rap and a histrionic soul sample - it delivers just the kind of great "more-of-the-same" rave tracks that you usually hope for with this kind of album, but far too rarely get.
Time Modem: Transforming Tune (1992)
By far the best album based on the EBM-derived rave sound, and basically just one of the greatest, most consistent and convincing solutions to the whole rave album conundrum. Eventually, at least on the shorter and more condensed vinyl version, it's only 50% rave tunes, with the rest being atmospheric mood tracks, but the two elements frame each other so brilliantly that it all just seems like a whole, simultaneously a blinding rave album that works as a pure listening experience, and a sort of futuristic concept album that just happens to work as blinding rave music as well. The softer tracks are interesting in that they're not really excursions into already established moody electronics, like ambient or soft house, but mostly a further mutation of the elements used in the harder rave tracks - epic chorus pads, fanfare-like melody riffs, driving EBM sequencer bass - all turned dreamy and melancholically introverted. The bittersweet ambiguity is obviously a part of Time Modem's EBM/new wave genes, and even permeates the rave tracks, most brilliantly on "Welcome to the 90's", the pinnacle of, and key to, the album. Over hectically opulent, yet mercilessly focused rave, an aloof voice embodies naïve early-nineties cyber-futuristic excitement, with sentences like "we don't need the sun anymore" and, as far as I can hear, "greed is the means to success", but countered by what sound like movie samples in German, shouting bitterly about the horrible state of the world, including the phrase "es ist alles Lüge" ("it is all lies"). I get the impression that Transforming Tune reflects the ambivalence felt by rave artists coming from older EBM/industrial-derived techno - on the one hand swept away by the hope and celebratory spirit of rave as the soundtrack to the future and a unified Germany, on the other hand still influenced by EBM's dystopian, cyberpunk view of technology. This seems further supported by the closing track "Space and Time", sort of a defeatist hymn to isolation and alienation through technology, with a melancholy girl's voice uttering phrases like "take the headphone and flow away, want to be one with my sound" and "cannot forget my reality, eternally apart and loneliness" - a heartbreakingly precise prediction in many ways, and the perfect way to end an album that successfully manages to be invigorating rave, ambiguous electronic mood music, and conceptual futurism all at once! A lost gem if ever there was one.
New Scene: Waves (1992)
Coming from the same scene as Time Modem, but with the EBM and new/cold/dark-wave elements much more dominant, this is only full-on rave music on two tracks - "Sucken" and "PSG 22" - while the rest is somewhere between proto-trance and the same sort of slowly drifting, atmospheric not-quite-ambient that was also found on Transforming Tune. The more streamlined, dark and restrained sound makes Waves a contemplative experience rather than a rave album, but I think it's still worth including here, not just because it's pretty good on its own terms, but also because it represents a strange and unique alternative way to turn techno into home listening music, completely different from the path taken by the IDM and ambient scenes, and in many ways not really sounding like anything else: Beneath the dark and dreamy surface, the underlying structures of the tracks - the way they're built - are still clearly recognisable as early-nineties continental rave techno.

O: From Beyond (1992)
The first album from one Martin Damm, who would eventually release countless records of almost all kinds of hardcore, rave and techno, under a plethora of pseudonyms like Biochip C, Search and Destroy and The Speed Freak. Though there aren't many remnants of it on his later releases, he started out in much the same EBM/new beat/electro-based area of continental rave as Time Modem and New Scene, and From Beyond is also a combination of old school German rave, proto-trance and epic synthscapes. It's a bit on the long side, with a few less-than-inspired tracks, which I guess could partly be blamed on the fact that, unlike Transforming Tune and Waves, this is a CD-only release, and therefore didn't benefit from being trimmed down to a more focused single LP, as those albums were. Still, there's a lot of really good stuff here, and especially the atmospheric synth tracks have an endearingly dated period charm, though many would probably find their swelling pads and cod-epic melodies too much. Personally, I find this aspect of the album fascinating - as with the two previous ones, a glimpse of a completely forgotten road not travelled.

Space Cube: Machine & Motion (1992)
A bit of an outsider here, Space Cube started out making more ravey tracks, and are often remembered as one of the few early German rave acts to use breakbeats, but on the debut album Machine & Motion, they worked with a lot of different elements, including pounding techno and drifting acid, as well as - unfortunately - quite a bit of house, foreshadowing Ian Pooley's later career. The result is simultaneously varied and somewhat more "pure" than most records on this list - as in "not cheesy" and "closer to proper dark'n'deep minimal techno". Which eventually makes it a bit too "nice" and anonymous to my ears. There's some really good tracks on it, such as the 80 Aum/T99-ish brutalist "Disruptive" and the UR-acid-spacey "Forbidden Planet", but as a whole there's too much good taste and relaxed smoothness for it to be really convincing as a rave album. Perhaps it could be seen as a good example of how there still weren't any clear genre borders at this time, and how an album from the rave/techno scene could contain many different styles and ideas, and on those terms I guess it has merit - many of the softer tracks are not bad at all - but I'd still much rather listen to an album of one-dimensional "rave fodder" than this exercise in well-crafted style and diversity.
THE BELGIAN/DUTCH SCENE: It's strange that Belgium usually gets all the credit here, because the infamous brutalist sound was just as much the responsibility of Dutch producers - several of the classics that are by convention considered Belgian (like Human Resource or 80 Aum) were in fact Dutch. Of course, the Belgians did have the EBM and especially new beat scenes to draw on, which could explain why, when it came to albums, they seemed a bit more prolific. In the end, though, the rave made in the two countries was generally so similar that it makes little sense to separate them - which is actually a bit strange in itself, considering that the Dutch producers had roots in hip hop and italo disco rather than new beat/EBM, and eventually went on to create gabber, while the Belgian producers more or less subsequently disappeared from the techno map.
Human Resource: Dominating the World (1991)
According to some guy on Discogs, this was actually released before "Dominator" became the huge hit that it was, making it a very odd thing on this list: an album that produced a classic track, rather than one produced to cash in on an already established classic. Given the title, though, they were at the very least aware of the track's potential. In addition to it, there's actually a lot of quality stuff present here, especially in the beginning, with inventive and well-produced, slightly more atmospheric (but still highly invigorating) tracks like "The Joke" and "Faces of the Moon". Unfortunately, much like Eon's Void Dweller, the tracks become less interesting as the album goes on - still very well constructed, but more or less lacking really memorable hooks or sounds, or the general raw power usually associated with the Belgian sound that "Dominator" played such a big part in developing. Furthermore, the remixes of "The Joke" and "Dominator" - nice as they are - make the album longer than it needed to be, and the lack of new ideas on the second half becomes even more obvious.
LA Style: The Album (1992)
Of all the one-hit rave wonders, LA Style had the biggest challenge in creating an album, and sadly, they weren't up to that challenge at all. Not only was "James Brown is Dead" the biggest and most iconic of all the brutalist hits, it was also based on such an idiosyncratic and immediately recognisable riff that it was pretty much impossible to expand upon - the countless clones that sprouted up overnight all tried to recreate the exhilaration of that fanfare blast, and you'd recognise them right away. "James Brown is Dead" is simply impossible to take any further (the sound alone - it actually managed to give the impression of realizing the old joke: make everything louder than everything else!), and it might seem a bit like a novelty track in this way, but don't get me wrong; it's arguably the greatest rave track ever exactly because of this - rather than a "gimmick", it's a singular stroke of genius. And how do you cash in on that without repeating yourself ad nauseam? Well, don't ask LA Style, because that's just what they did - pretty much every single track on The Album recycles the "James Brown" riff in one form or another. At best it gets more baroque and exaggerated, as on "LA Style Theme" - though that track could pretty much be called a "James Brown is Dead" remix; other times it regresses into something slightly more italo-piano-ish, and far too often a hearty dose of the most generic early-nineties dance rap-and-soul imaginable is added. A few moments have merit, but only when they're repeating what was already perfect on "James Brown is Dead" - that single is still all the LA Style you need, and nothing here changes that.

T99: Children of Chaos (1992)
Next to "James Brown is Dead" and "Dominator", T99's "Anasthasia" is probably the greatest belg-core hit of all time, but unfortunately the album based on it isn't much better than the two previous ones. It's not for lack of trying, though, as T99 clearly wanted to make a varied and coherent whole, rather than just a bunch of "Anasthasia" clones. Now, a bunch of "Anasthasia" clones would probably have been a lot better than this mess of crossover rave and more or less successful attempts at laid-back tracks, but the latter actually do have some merit: Patrick De Meyer and Oliver Abbeloos are excellent craftsmen, and they know how to create a good tune with good futuristic sounds at low speeds as well (despite some annoyingly "musical" elements like the nauseating sax sample on "After Beyond"). The biggest problem on Children of Chaos is the insufferable amount of tacked-on vocals - the usual lame rapping and cringeworthy soul divas - that render otherwise brilliant rave tracks like "Maximizor", "Cardiac" and "Nocturne" almost unlistenable. And it's really a shame, for had the album kept to just raw synthetics, and perhaps scaled down the number of atmospheric tracks slightly while developing some of the shorter rave sketches a bit, this could really have been the Belgian rave album. But then, you could say something similar about most of these.
Quadrophonia: Cozmic Jam (1991)
Prior to his success with T99, Oliver Abbeloos also had a couple of hits together with Lucien Foort as Quadrophonia. Containing many of the classic Belgian elements - exhilarating blasts of raw, angular bombast - as well as a surprising amount of breakbeats, Cozmic Jam unfortunately also features extremely dated rapping on all tracks but a few short interludes. I guess this was a conscious choice, an attempt to give them a bit more personality (much like the added vocalists of 2 Unlimited), but it completely ruins what could otherwise have been a pretty good - if perhaps a bit long - rave album.
Pleasure Game: Le Dormeur (1991)
Le Dormeur I've talked about before, and though it still has its flaws - most tracks are slightly abbreviated, and there's a couple of somewhat uninspired "ambient" fillers - it also remains one of the most straightforward and convincing of the early-nineties rave albums. Actually, of the Belgian albums, only one was better, and interestingly, that one was made more or less by the same people.
DJPC: 100% (1992)
While DJPC was fronted by DJ Patrick Cools, the production team behind the project included Pleasure Game's Jacky Meurisse and Bruno Van Garsse - both coming from the EBM/new beat outfit SA42 - and with 100% they made the ultimate Belgian rave record, the album that Human Resource or LA Style or T99 should have made. Some tracks are better than others, but even when they're a bit too gimmicky ("Return of Tarzan", "Di Da Da, Di Da Di Da Da") or ridiculously bombastic-by-the-numbers ("Control Expansion"), they're still super effective, focused and exhilaratingly brutal. And the best tracks are just incredibly good: all ugly angular machine music and relentless mentasm madness, with not a guest vocalist or smooth'n'laid-back track in sight.

Hypp & Krimson: Rave Sensation (1991)
Containing tracks released under six different names, this is sometimes listed as a compilation. Everything is produced by the duo of Jeff Vanbockryck and Patrick Claesen, though, with different collaborators here and there. We get the instrumental versions of a couple of Miss Nicky Trax productions and the well-known Ravebusters classics "Mitrax" and "Power Plant" (though for some reason the latter is credited to Hypp & Krimson), showing their roots in new beat, and in addition several gems unknown to me in the same vein, like "Torsion", "Dreams Forever" (as Code Red) and "Liquid Empire" (as Cold Sensation) - heavy rave fodder that isn't all that inventive or catchy, but makes up for it with precision-locked efficiency. The weakest part is the more floaty, atmospheric offerings - one of the Nicky Trax tracks and the not very aptly named "Rave Banging" - which aren't exactly bad, just pretty uninspired, and they do create a couple of dull drops in the overall energy flow. As a result, Rave Sensation doesn't completely live up to its name, but it's not too far off either, and certainly one of the more consistent of the Belgian albums.
Holy Noise: Organoized Crime (1991)
Perhaps the greatest of the Dutch rave producers, Holy Noise was where DJ Paul got his first real success with tracks like "The Nightmare" and especially "James Brown is Still Alive!!", his riposte to LA Style, before he became a key player on the emerging gabber scene. As such, he's one of the only Benelux producers to have had a noteworthy career after the rave heyday, as well as perhaps the most obvious link between gabber and the early brutalist rave sound. Organoized Crime is one of the better lowland albums, even though it sort of disappoints because it could easily have been so much better. While no tracks are bad as such, and several are really great, there's also a couple that are a bit too mediocre - in particular the more minimalist ones like "House Orgasm" and "The Noise" - and since the album is much longer than it has to be, it just becomes a bit exhausting overall. If those two tracks were kicked out, as well as one of the two versions of "Get Down Everybody", we'd have an absolute classic here, one for the rave album top ten. Instead, we get something that does contain a lot of good stuff, but eventually loses steam before you're through.
OTHERS: Two (or perhaps rather two and a half) other local scenes need mentioning, being big enough to eventually produce full-length rave albums (that I've heard of). Yet they're very different: the Italians produced endless amounts of generic (and often brilliant) rave fodder, a variant of the brutalist sound more or less infused with elements of italo disco and italo house, while the rave proper produced by the American scene seemed like maverick attempts to participate in what was going on in Europe, rather than a reflection of an overall American sound. Finally, the odd Spanish "makina" scene was arguably closest to early German rave, with EBM and new wave still very clearly present.

Moby: Moby (1992)
This amazing LP will be a bit of a surprise to anyone only familiar with the later emo-Moby - or anyone who thinks "Go" is representative of his early style. What you get is a tour de force of almost perfect rave intensity, with "Go" and the closing "Slight Return" being the only softcore tracks. Sure, the first half is by far the best, and there's a house piano here and a soul sample there that you could certainly do without, but there are only a few of these (and let's be fair - even The Prodigy's Experience had a couple), and they're not prominent enough to do any real damage to what is otherwise a brilliant collection of tracks, action-packed with jittery ideas, catchy sequencer riffs and dynamic twists and turns. Whether it's near-claustrophobic EBM-ish tightness ("Yeah", "Have You Seen My Baby"), strings'n'acid-driven brightness ("Help Me to Believe"), or explosive, unhinged rave insanity (pretty much all of side A), none of the tracks are bad - and some of them are simply among the very best of the era. This is pretty much the only Moby album you need - but if you're after early-nineties rave then you really do need it (and who'd have thought you really needed any Moby at all?). As rave-albums-that-work-as-albums go, Moby is among the very best.
Oh-Bonic: Power Surge (1992)
A brilliant little album that sadly seems to be completely forgotten. As far as I can figure out, Oh-Bonic was basically Omar Santana, who has followed a long and pretty weird trajectory through electronic dance music: starting out as part of Cutting Records' early electro/house family, and eventually ending up (last time I checked, anyway) producing bizarre "patriotic gabber" in the wake of 9/11 - presumably distancing himself from his earlier New York Terrorist moniker, which I guess didn't seem that funny anymore. In between, however, there was both a more ordinary gabber/hardcore phase (much in the typical Industrial Strength/Brooklyn vein), as well as an earlier phase of awesome brutalist rave - with Power Surge as the crowning achievement. Here, all the most obvious and effective rave elements are supercharged by generous inspiration - every track is jam-packed with ideas and variation, constantly shifting and adding small electrifying details - as well as a knack for catchy riffs. Much like the early Prodigy in this respect, actually. The only downside is the tacked-on rap that makes a couple of otherwise excellent tracks seem cringeworthily dated - in one case made even worse by a liberal dose of soul diva samples. With the rest of the album being so pure in its super-synthetic sound design, these attempts at adding a human emotional element just make it seem much more mundane and backwards-looking. Not so much, though, that Power Surge isn't still a small gem in the same vein as Energy on Vinyl and The Megablast, well worth tracking down.
Digital Boy: Futuristik (1991) / Technologiko (1991)
With two albums in one year, Digital Boy was one of the most prolific of the Italian producers, and probably the closest we get to a household name from that scene. In a lot of ways I really want to like the ambitious Futuristik double LP; it has a lot going for it: a good title and a ridiculous cover, a couple of really good, catchy rave tracks, and some unexpected oddball moments (a bleep'n'bass-ish xylophone riff here, a playful rip-off of Speedy J's "Pullover" there, a live track that actually sounds live). And while there's also a lot of more uninspired, clumsy filler that doesn't quite reach escape velocity, only a few tracks are really awful (especially "Touch Me", a horrid attempt at a kind of smooooooth hip house torch song). The problem is that it just goes on for such a long time, which means that the mediocre stuff that would be acceptable in smaller doses on a more focused album eventually becomes the defining character here, and rather than being invigorating like the best rave should be, it loses all momentum in the long run. Luckily, Technologiko gets it right - a super-condensed eight-track mini-LP that just delivers functional rave fodder in the most hook-filled, simple and electrifying way. None of the tracks are lost classics, but there are no real duds either: they all work the formula brilliantly, and like The Megablast, Energy on Vinyl or Power Surge, the result is a record that captures rave's single-minded, disposable hyper-excitement perfectly, exactly by having no other ambition than being a short, one-dimensional energy blast.

Bit-Max: Galaxy (1992)
Pretty much as concentrated as italo-techno gets: generic-yet-explosive rave tools by a bunch of virtually unknown producers (the only ubiquitous one on the album being one Maurizio Pavesi), overdosing on all the most effective euro-rave elements (mentasm stabs, hypnotic EBM arpeggios, bombastic fanfare blasts - though, thankfully, no pianos), with half of it sounding suspiciously like something you've heard somewhere before. And - of course - there are several otherwise brilliant tracks ruined by relentless diva samples, which prevents Galaxy from being up there with the very best. But there's still plenty of good stuff, making it highly recommended to anyone into golden-era rave at its most gloriously mercenary.
Teknika: Yo No Pienso en la Muerte (1991)
This is the only example I've got from the Spanish "makina" scene, basically a local take on EBM-influenced rave in the same vein as German acts like Time Modem and 'O'. I suppose there's a lot more out there, but at least as far as albums go, I've not located any others (not that I've tried that hard, it must be said). In any case, it's an effective mini-LP, where most tracks deliver a relentlessly propulsive, slightly more minimal and machinic version of the aforementioned German sound. A couple of melancholic tracks are thrown in for variety, sounding somewhat dated, but in a charming way - sort of instrumental "minimal wave" rather than the floating, complex mood pieces the German acts were doing. A nice little album that holds together very well.
AFTERTHOUGHTS: By 1993, "rave" as an overall term had more or less disappeared - or rather, it had split up into fully self-sufficient and clearly distinct niches like darkcore/proto-jungle, trance, gabber/happy hardcore, or even "techno", which had hitherto been an umbrella term for pretty much all the rave forms, but now became more and more synonymous with pounding minimal functionalism. Still, some producers continued making "rave" in one way or another, at a time when there wasn't really a scene for non-specialized rave any more. Whether they were simply a bit too slow in following the changing landscape, just had some tracks left over that would have been perfectly up to date a year earlier, or deliberately tried to continue the general, all-encompassing rave spirit, I find these out-of-time rave albums fascinating and deserving of some mention. Indeed, some of them are truly brilliant in their own right.

GTO: Tip of the Iceberg (1993)
In many ways a very sympathetic album, wrapping the classic "old faves + some new fillers" formula in a pan-stylistic, almost meta-rave "unite the scene" concept, at a time when the rave scene was busy splitting up. We get more or less everything, from piledriving gabber and hardcore through cold, monolithic trance to softer, more housey tracks, sometimes with an almost bleep'n'bass feel. The weird thing is how none of this sounds quite right - rather, it sounds like someone decided to make trance or gabber, instead of growing it organically from within a scene. This certainly gives Tip of the Iceberg an original sound, but it also means that it doesn't really manage to create the feel of rave-distilled-in-album-form that it seems to aim for. In particular, there's an awkward minimalistic restraint to it, at odds with the cutting-loose-and-going-mental effect you'd usually expect from these styles. This goes for the sound design - strangely polished and empty even in the raw'n'ruff hardcore tracks - as well as the compositions, where there aren't that many ideas or really memorable hooks around. That eventually becomes a problem for an album as long as this. In smaller doses - like side B with its metal-machine gabber, or the more hit-oriented side C - it's quite enjoyable, but as a whole, and especially with the two somewhat uninspired and monotonously minimal closing tracks, Tip of the Iceberg gets a bit tiring. Which is a shame for a record that so deliberately and head-on tries to solve the rave-album problem. That said, it remains refreshingly odd.
Sonic Experience: Def til Dawn (1993)
An unabashed "meta-rave" effort, where raw and ruff breakbeat tracks are interlaced with sound clips from open-air raves (mostly police confrontations). The clips are sort of charming, I guess, but they do make the LP feel more like a kind of "historic document" rather than simply a great collection of generic-yet-invigorating 'ardcore at its most unpolished - which is basically what it is. But perhaps what Def til Dawn shows is that by 1993 this sound was already seen as something to look back upon nostalgically, as things had moved much further ahead. A great snapshot of an era that was over almost as soon as it had started.

Sonz of a Loop da Loop Era: Flowers in My Garden (1993)
Just a mini-LP, but worth including here as it's the closest we get to a Sonz of a Loop da Loop Era album. Perhaps its shortness is an advantage, as all six tracks capture Danny Breaks at his b-boy-derived best, filled to the brim with jittery riffs and hyperkinetic breaks (more proto-big beat than 'ardcore on "Breaks Theme pt. 1", but still great), with no time to fall into any of the traps so many others fall into when having to deliver a full album. Flowers in My Garden might not be as catchy and relentless as Experience, but it's a pure distillation of 'ardcore at its most unassumingly loose and playful.
Criminal Minds: Mind Bomb (1993)
The flipside to Sonz of a Loop's silly and colourful sound, Mind Bomb is a much more raw and aggressive take on 'ardcore. There are quite a lot of slightly backwards-looking (for its time) techno elements, as well as plenty of proto-jungle - just not dominant enough for it to actually be jungle (unlike, say, Bay B Kane's Guardian of Ruff or A Guy Called Gerald's 28 Gun Bad Boy, which are arguably just on the other side of the divide - perhaps not fully developed jungle yet, but still closer to jungle than to breakbeat rave). None of the tracks are super memorable or lost gems, but neither are any of them bad - they're hectic, rough and hard-hitting examples of generic hip hop-influenced 'ardcore of the kind where the genericness is a crucial element, and as such it is perhaps the most "authentic" example of this music in album form.
After ten years of blogging at Bicycle Design, it's time for me to move on. This post may seem a bit like déjà vu to those of you who remember my "final" post from February of last year, but this time I really do need to shift my focus to other projects and goals. I won't repeat everything that I said in that previous post, but I do want to reshare the first paragraph:
"It is hard to pinpoint the exact reason that I started this blog in 2005. I could say that it was to showcase the work of industrial designers in the bicycle industry, or to give students a place to share their bike related ideas and concepts, or maybe the idea was just to generate discussion and get people thinking about the potential of bicycles, and other types of human powered machines, to change the world for the better. Over time, I believe the blog has served all of those purposes, but when I quickly put together that first post over eight years ago (over 10 now), I wasn't thinking any of those things. The fact is, this blog was something I started one day on a whim, and I never imagined that it would ever last so long… or reach so many people."
At its peak, Bicycle Design reached over 100,000 readers per month, and I am truly grateful for each of them (each of you I should say). Since 2005, I have connected with many wonderful people through my blog, including talented designers who share my passion for bikes and human powered transportation. Thank you to all who contributed designs, participated in discussions, or just followed along. In the decade that this blog has been a part of my life, I have made new friends and learned more than I could have ever imagined. It has been a great experience, and I wouldn't trade it for anything.
Though I have really enjoyed sharing other people's designs for the past 10 years, I am looking forward to making time for a few of my own art and design projects (outside of the work I get paid to do). I also want to refocus on local bike advocacy issues, something that I was heavily involved with in the past, but have not had much free time for lately.
Going forward, I will continue to blog occasionally at JCT.design (pay attention to the "Bikes/Active Transportation" category where I plan to share a few of those personal design projects that I mentioned earlier). You can also connect with me on Twitter, Instagram, and a few other places on the web at @jctdesign. I do plan to continue updating the Bicycle Design Facebook page when interesting bike designs catch my attention, so follow along there if you don't already. Finally, I will contribute to Core77 every once in a while, so keep an eye out for me on that site too.
If you haven't been reading Bicycle Design from the beginning (or even if you have), I encourage you to look back at the 10 years worth of archives from this blog. Recently, I shared The top 25 posts from 10 years of Bicycle Design, and I would consider that post a good place to start. Beyond the ones that have been the most popular though, there are many other great designs to explore in the 900+ archived posts. If you have been reading Bicycle Design for a while, perhaps you can share one of your favorite older posts in the comments for others to check out.
Thanks again to all of you who have contributed, commented, or just read along over the past 10 years. You are the reason that this blog was such a great experience for me, so I really do hope you will keep in touch. You know where to find me.
//
By now, anyone reading this has likely seen the recent redesign of the Specialized Venge aero road bike. In the development of the new Venge, Specialized engineers used Altair's HyperWorks software suite "to analyze and improve the aerodynamic performance of the bike as well as optimize the weight and structural efficiency of the frame." Chris Yu, Aerodynamics & Racing R&D Lead at Specialized Bicycle Components, talks a bit about the design process in an interesting video on the Altair website. To find out a bit more, I asked Mike Barton, computational fluid dynamics (CFD) application specialist at Altair, a few questions:
Before we get into the Specialized Venge project, can you give me a brief overview of Altair and the type of work that you do?
Founded in 1985, Altair is a software engineering and consulting company headquartered in Troy, Mich. After three decades, Altair has grown into a global company specializing in optimization and simulation technology with more than 2,600 employees and 5,000 clients. Altair's software is used by a diverse set of industries, from auto, consumer goods, aero, ship design and rail to heavy machinery. The company uses its expertise and extensive work in various commercial and industrial sectors to help designers and product developers in a wide range of unique industries, including bicycle design, golf club design, and other sporting equipment such as shoes.
The company is best known for HyperWorks, its suite of computer-aided engineering programs that helps engineers simulate and optimize designs for lightweighting; noise, vibration and harshness; durability; crash and safety; and more.
In addition, via its Partner Alliance, Altair provides customers with access to more than 35 additional software products to help designers and engineers develop and optimize their designs and products.
In addition to developing the Fluid Dynamics software that was used to optimize the aerodynamics of the new Venge, did Altair work directly with the industrial designers and engineers at Specialized throughout the design and development of the bike? Tell us a little about the process.
Specialized has been working with Altair for approximately three years to streamline its internal processes.
In the video on Altair's website, Specialized Aerodynamicist Chris Yu mentions that the CFD software allowed them to run problems quickly without perfect CAD, and that those virtual tests of frame shapes saved a lot of time and money over building multiple prototypes for wind tunnel testing. Once the first physical prototypes were tested though, how consistent was the real world data with the results from the software?
We did not receive verification metrics from Specialized regarding the accuracy of their simulations compared to wind tunnel estimates. However, Altair Engineering has conducted similar studies related to bicycle wheel performance, with works published by the American Institute of Aeronautics and Astronautics (AIAA 2009, 2010 and 2011). The following publications detail the CFD analysis of several bicycle wheels for a range of speeds and yaw angles compared to wind tunnel measurements. The analyses demonstrate good agreement between the CFD simulations and the experimental data, and also reveal unique flow attributes that were not captured during the wind tunnel testing.
A Comparative Aerodynamic Study of Commercial Bicycle Wheels using CFD
An Aerodynamic Study of Bicycle Wheel Performance Using CFD
A bicycle (even a pro-level racing bike) is slow compared to an airplane or an F1 car. How does the lower intended speed range affect the way that you optimize the aerodynamics of the frame?
Although the speeds at which a professional bicycle rider performs are considerably lower compared to an aircraft or automobile, the aerodynamic effects are still very much present.
Consider that the typical drag coefficient (a non-dimensional quantity used to quantify the drag or resistance of an object in a fluid environment) of a professional cyclist crouched on top of a moving bicycle ranges between 0.7 and 0.9. Assuming that the bicycle is highly efficient and the rider is aerodynamically streamlined, let's estimate the Cd to be 0.83 and his/her frontal area to be 3.2 ft². If we consider that the rider is traveling at 20 mph, he/she feels approximately 3.27 pounds-force from the aerodynamic loading acting to slow him/her down. In comparison, a typical touring bicycle and upright rider with a Cd of 1.0 and a frontal area of 4.3 ft² feels approximately 4.4 pounds-force at 20 mph[i].
Based on these aerodynamic effects, and accounting for minor differences in rolling resistance, the touring rider must put in approximately 27 percent more horsepower to keep pace with the professional rider and bicycle. Contrast that with a highly efficient automobile like the Toyota Prius, which typically has a Cd of less than 0.25 and a frontal area of 23.4 ft²[ii]. At the rider's speed of 20 mph, its aerodynamic drag force is approximately 5.98 pounds-force. Although the scales at which these vehicles travel are considerably different, they are all influenced by the fluid they move through, and this must be taken into account when designing for efficiency and speed.
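For readers who want to reproduce these back-of-the-envelope figures, here is a minimal sketch of the standard drag equation (drag force = 1/2 x air density x speed² x Cd x frontal area). This is my own illustration, not Altair's code, and it assumes standard sea-level air density, so expect small deviations from the numbers quoted above depending on the density assumed:

```python
# Sanity-checking the drag arithmetic quoted above, assuming standard
# sea-level air density. Slug-ft-s units make the result come out in lbf.

RHO_SLUG_FT3 = 0.0023769      # standard sea-level air density, slug/ft^3
MPH_TO_FTS = 5280.0 / 3600.0  # 1 mph = ~1.4667 ft/s

def drag_force_lbf(cd: float, frontal_area_ft2: float, speed_mph: float) -> float:
    """Aerodynamic drag F = 0.5 * rho * v^2 * Cd * A, in pounds-force."""
    v = speed_mph * MPH_TO_FTS
    return 0.5 * RHO_SLUG_FT3 * v ** 2 * cd * frontal_area_ft2

# Touring bike + upright rider (Cd ~ 1.0, A ~ 4.3 ft^2) at 20 mph: ~4.4 lbf
print(f"touring rider: {drag_force_lbf(1.0, 4.3, 20):.2f} lbf")
# Toyota Prius (Cd ~ 0.25, A ~ 23.4 ft^2) at 20 mph: ~5.98 lbf
print(f"Prius:         {drag_force_lbf(0.25, 23.4, 20):.2f} lbf")
```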
What about the effect of the rider on the aerodynamics? Did the CFD software factor in the messy aerodynamics of a moving cyclist, or were just the aerodynamics of the bicycle considered?
The primary focus of the CFD conducted by Specialized was related to frame design. The design and simulation did not include effects of the rider, but this is possible with Altair HyperWorks AcuSolve.
Altair has developed a proof of concept simulation to demonstrate the lower torso and legs of a manikin pedaling. This analysis was conducted with the use of two Altair applications within a single simulation. The "Co-Simulation" combines the aerodynamic analysis of AcuSolve CFD with the multi-body systems analysis tool MotionSolve. This unique combination of tools allows engineers to analyze arbitrary rigid body motion and the effects of that motion on the surrounding air.
From the proof of concept simulation, Altair has determined that the air movement in the wake of the rider's legs and the bicycle's pedals significantly affects the aerodynamic loading on the rear triangle and rear wheel of the bicycle. The simulation demonstrates that the air flow in this wake is highly transient and pulses with a frequency higher than that of the cyclist's cadence. As each pulse of air collides with the frame and rear wheel, a force acts to slow the rider down and may cause minor instabilities depending on the oncoming wind and direction of travel. Although somewhat minute, these forces can accumulate and will have an effect on the performance of the bicycle as it moves with the rider.
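As a rough illustration of the co-simulation pattern described here - not the actual AcuSolve/MotionSolve interfaces, which I haven't seen - the coupling typically amounts to a per-timestep exchange between a flow solver and a motion solver. Both solver functions below are hypothetical stand-ins:

```python
# Schematic of a staggered fluid/multibody co-simulation loop - the general
# pattern, NOT a real solver API. Both step functions are placeholders.

Pose = dict   # e.g. {"position": [...], "orientation": [...]}
Loads = dict  # e.g. {"force": [...], "torque": [...]}

def fluid_step(pose: Pose, dt: float) -> Loads:
    """Stand-in CFD step: advance the flow around the body at its
    current pose and return the aerodynamic loads it experiences."""
    return {"force": [0.0, 0.0, 0.0], "torque": [0.0, 0.0, 0.0]}  # placeholder

def body_step(loads: Loads, dt: float) -> Pose:
    """Stand-in multibody step: integrate rigid-body motion (e.g. the
    pedalling legs) under the current aerodynamic loads."""
    return {"position": [0.0, 0.0, 0.0], "orientation": [1.0, 0.0, 0.0, 0.0]}

def cosimulate(t_end: float, dt: float = 1e-3) -> None:
    """Each step, the flow solver sees the latest body pose and the motion
    solver sees the latest loads; repeating this exchange is what lets wake
    pulses feed back into the body's motion, and vice versa."""
    pose: Pose = {"position": [0.0, 0.0, 0.0], "orientation": [1.0, 0.0, 0.0, 0.0]}
    t = 0.0
    while t < t_end:
        loads = fluid_step(pose, dt)   # motion -> aerodynamic loads
        pose = body_step(loads, dt)    # aerodynamic loads -> new motion
        t += dt
```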
In addition to aerodynamic concerns, Chris Yu mentioned the importance of maintaining structural targets while minimizing weight (to the UCI limit). Can you talk about the structural "ply by ply" analysis of the carbon and explain its relationship to the fluid optimization work? Were these completely separate problems to address, or were they analyzed and optimized together?
These were analyzed and optimized together.
What else was interesting about this project compared to the work that you typically do?
One of the more interesting aspects of Altair's work with Specialized was the use of out-of-the-box tools for advanced aerodynamic design. Specialized made use of the Altair HyperWorks Virtual Wind Tunnel to design the Venge frame. This tool gives typical design engineers the power to perform CFD simulation with little or no experience with fluid dynamics. This is in contrast with other CFD tools that require significant knowledge related to fluid dynamics, numerical analysis, and computer science.
The Virtual Wind Tunnel allows engineers to directly import CAD into the simulation environment, apply simple boundary conditions to the tunnel walls and the part, define the flow conditions, and run the simulation on a modest workstation. The results are reported automatically or can be custom formatted to meet specifications desired by the design team. The overall process in Virtual Wind Tunnel has been developed and tuned for engineers to use as a part of the entire design cycle. The aim is to give engineers the tools needed to create many more design iterations with less time spent on traditional prototyping and testing.
Footnotes:
[i] Whitt, Frank Rowland; Wilson, David Gordon. Bicycling Science. MIT Press, 1982 (2nd edn), 377 pages. ISBN 0262731541, 9780262731546.
[ii] http://ecomodder.com/wiki/index.php/Vehicle_Coefficient_of_Drag_List
//
For his final project in Mechanical Engineering at the Danish Technical University, Israeli-born Danish citizen Kuba Szankowski designed a velomobile based on the classic Leitra design. He worked closely on the project with Carl Georg Rasmussen of Leitra, who "at eighty years young still builds and maintains all the velomobiles, as well as most any other aspect of the business himself." Prior to earning his degree, Kuba had worked as a bicycle mechanic and as an advocate for bicycle transportation in Jerusalem, so he wanted to build on those experiences and create a design that would introduce human powered transportation to a wider audience.
He explains the idea behind his redesign of the Leitra:
"My main intent was to develop a bicycle fit for my mother (this is actually what I described in my exam, oddly enough)- a bicycle which cannot fall, with electrical assistance, which encapsulates the rider and protects by being visible and high above the ground. This, along with my own perspective of the ways with which the Leitra velomobile could be improved; these being the lack of ergonomic adjustability, its lack of customization for shared usage in a household and others brought me to my design.
I describe a bent aluminium chassis with a platform for a battery pack or other electrical components. The Leitra seat - a 900-gram fiber composite, originally designed by Carl G. by imprinting his bottom in the snow - was 3D scanned. The resulting file was used to develop the two seating positions, along with an adjustable crank mast.

Front view of seat upright and reclined
The vehicle is equipped with 24-inch wheels and a unique (to my knowledge) suspension system, a hybrid gas damper/leaf spring. This is in contrast to the existing Leitra suspension, which is also a unique (and field-tested!) double-cantilever leaf spring, with about two cm of displacement. What the existing suspension lacks is a damping mechanism.
My analysis consisted mostly of finite element modelling of the existing leaf spring and chassis. By having two models (the existing frame vs. the redesign), I was able to achieve what I called "independence from model discretisation". The viability of all this is of course subject to debate, and I am happy to share my findings with anyone interested.
In terms of this vehicle as a solution to the "blue ocean" problem, I envision three resulting categories: the base tricycle, with no electrical assistance; a pedelec ("e-bike") class tricycle; and an electric vehicle capable of higher speeds (up to 45 km/h)."
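On the damping point Kuba raises above: to see why it matters, here is a toy quarter-vehicle model (my own illustration with assumed parameters, not anything from Kuba's finite element work). An undamped leaf spring keeps ringing after a bump, while even a modest gas damper settles it almost immediately:

```python
# Toy model of m*x'' + c*x' + k*x = 0, showing why the hybrid design adds a
# damper to the leaf spring. All parameters are illustrative guesses, not
# values from the actual Leitra project.

M = 30.0     # kg, assumed share of vehicle + rider mass on one wheel
K = 20000.0  # N/m, assumed leaf-spring stiffness
DT = 0.001   # s, integration step

def residual_amplitude(c_damper: float, x0: float = 0.02, t_total: float = 1.0) -> float:
    """Integrate the spring-damper system with semi-implicit Euler, starting
    2 cm off equilibrium (the quoted spring travel), and return the peak
    amplitude over the second half of the run."""
    x, v = x0, 0.0
    history = []
    steps = int(t_total / DT)
    for _ in range(steps):
        a = (-K * x - c_damper * v) / M  # acceleration from spring + damper
        v += a * DT
        x += v * DT
        history.append(abs(x))
    return max(history[steps // 2:])

for c in (0.0, 400.0):  # N*s/m: no damper vs. a modest gas damper
    print(f"c = {c:5.0f} N*s/m -> amplitude after 0.5 s: {residual_amplitude(c) * 100:.2f} cm")
```

With no damper the 2 cm oscillation persists essentially undiminished, while the damped case decays to a fraction of a millimetre within half a second - which is exactly the behaviour the gas damper is meant to add.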
Kuba mentioned that he was interested to read the recent post by Karl Sparenberg of Windcheetah, whom he had the opportunity to meet, along with Mike Burrows, at the latest SPEZI festival. "I was happy to see a fellow engineer picking up the work of the masters and advancing it. I wish to find a new audience for the Leitra, and bring these weird machines closer to my homeland and the world." Kuba will be presenting his design on October 30th at the 2015 Velomobile Seminar in Austria, so hopefully that will be the first step in making his design a reality. I am looking forward to seeing how it progresses.