This Is What I Learned from Attending an Actual Flat-Earth Convention

While flat earthers seem to trust and support scientific methods, what they don’t trust is scientists.

Speakers recently flew in from around (or perhaps, across?) the earth for a three-day event held in Birmingham: the UK’s first ever public Flat Earth Convention. It was well attended, and wasn’t just three days of speeches and YouTube clips (though, granted, there was a lot of this). There was also a lot of team-building, networking, debating, workshops – and scientific experiments.

Yes, flat earthers do seem to place a lot of emphasis and priority on scientific methods and, in particular, on observable facts. The weekend in no small part revolved around discussing and debating science, with lots of time spent running, planning, and reporting on the latest set of flat earth experiments and models. Indeed, as one presenter noted early on, flat earthers try to “look for multiple, verifiable evidence” and advised attendees to “always do your own research and accept you might be wrong”.

While flat earthers seem to trust and support scientific methods, what they don’t trust is scientists, and the established relationship between “power” and “knowledge”. Sociologists have long theorised this relationship, and exploring it can help us understand why flat earth belief is enjoying a swelling resurgence.

Power and knowledge

Let me begin by stating quickly that I’m not really interested in discussing whether the earth is flat or not (for the record, I’m happily a “globe earther”) – and I’m not seeking to mock or denigrate this community. What’s important here is not necessarily whether they believe the earth is flat or not, but instead what their resurgence and public conventions tell us about science and knowledge in the 21st century.

Multiple competing models were suggested throughout the weekend, including “classic” flat earth, domes, ice walls, diamonds, puddles with multiple worlds inside, and even the earth as the inside of a giant cosmic egg. The discussion, however, often revolved less around the models on offer than around broader attitudes towards existing structures of knowledge, and the institutions that supported and presented those models.

Flat earthers are not the first group to be sceptical of existing power structures and their tight grasp on knowledge. This viewpoint is somewhat typified by the work of Michel Foucault, a famous and heavily influential 20th-century philosopher who made a career of studying those on the fringes of society to understand what they could tell us about everyday life.

He is well known, amongst many other things, for looking at the close relationship between power and knowledge. He suggested that knowledge is created and used in a way that reinforces the claims to legitimacy of those in power. At the same time, those in power control what is considered to be correct and incorrect knowledge. According to Foucault, there is therefore an intimate and interlinked relationship between power and knowledge.

At the time Foucault was writing on the topic, control of power and knowledge was moving away from religious institutions, which previously held a singular grip on knowledge and morality, and towards a network of scientific institutions, media monopolies, legal courts, and bureaucratised governments. Foucault argued that these institutions work to maintain their claims to legitimacy by controlling knowledge.

Ahead of the curve?

In the 21st century, we are witnessing another important shift in both power and knowledge due to factors that include the increased public platforms afforded by social media. Knowledge is no longer centrally controlled and – as has been pointed out in the wake of Brexit – the age of the expert may be passing. Now, everybody has the power to create and share content. When Michael Gove, a leading proponent of Brexit, proclaimed: “I think the people of this country have had enough of experts”, it would seem that he, in many ways, meant it.

It is also clear that we’re seeing increased polarisation in society, as we continue to drift away from agreed singular narratives and move into camps around shared interests. Recent Pew Research Center findings suggest, for example, that 80% of voters who backed Hillary Clinton in the 2016 US presidential election – and 81% of Trump voters – believe the two sides are unable to agree on basic facts.

Despite early claims, from as far back as HG Wells’ “world brain” essays in 1936, that a worldwide shared resource of knowledge such as the internet would create peace, harmony and a common interpretation of reality, it appears that quite the opposite has happened. With the increased voice afforded by social media, knowledge has become increasingly decentralised, and competing narratives have emerged.

HG Wells’ plan for a world encyclopedia. Scottbot

This was something of a recurring theme throughout the weekend, and was especially apparent when four flat earthers debated three physics PhD students. A particular point of contention arose when one of the physicists pleaded with the audience to avoid trusting YouTube and bloggers. The audience and the panel of flat earthers took exception to this, noting that “now we’ve got the internet and mass communication … we’re not reliant on what the mainstream are telling us in newspapers, we can decide for ourselves”. It was readily apparent that the flat earthers were keen to separate knowledge from scientific institutions.

Flat earthers and populism

At the same time as scientific claims to knowledge and power are being undermined, some power structures are decoupling themselves from scientific knowledge, moving towards a kind of populist politics that is increasingly sceptical of knowledge. This has, in recent years, manifested itself in extreme ways – through such things as politicians publicly supporting Pizzagate, or Trump’s suggestion that Ted Cruz’s father was involved in the JFK assassination.

But this can also be seen in a more subtle and insidious form: the way in which Brexit, for example, was campaigned for in terms of gut feelings and emotions rather than expert statistics and predictions. Science increasingly struggles to communicate its ideas publicly, a problem that politicians, and flat earthers, are able to circumvent with moves towards populism.

Again, this theme recurred throughout the weekend. Flat earthers were encouraged to trust “poetry, freedom, passion, vividness, creativity, and yearning” over the more clinical regurgitation of established theories and facts. Attendees were told that “hope changes everything”, and warned against blindly trusting what they were told. This is a narrative echoed by some of the celebrities who have used their power to back flat earth beliefs, such as the musician B.o.B, who tweeted: “Don’t believe what I say, research what I say.”

In many ways, a public meeting of flat earthers is a product and sign of our time; a reflection of our increasing distrust in scientific institutions, and the moves by power-holding institutions towards populism and emotions. In much the same way that Foucault reflected on what social outcasts could reveal about our social systems, there is a lot flat earthers can reveal to us about the current changing relationship between power and knowledge. And judging by the success of this UK event – and the large conventions planned in Canada and America this year – it seems the flat earth is going to be around for a while yet.

By Harry T Dyer/The Conversation

Posted by The NON-Conformist

Three black teens are finalists in a NASA competition. Hackers spewing racism tried to ruin their odds.

From left, India Skinner, Mikayla Sharrieff and Bria Snell, 11th graders from Banneker High School in Washington, are finalists in a NASA youth science competition. (Evelyn Hockstein/for The Washington Post)

The three D.C. students couldn’t believe the news. They’d developed a method to purify lead-contaminated water in school drinking fountains, and NASA announced last month that they were finalists in the agency’s prestigious high school competition — the only all-black, female team to make it that far.

“Hidden figures in the making,” one of the teens wrote in a celebratory text message to her teammates and coaches, a reference to the 2016 movie about the true story of three African American women who worked for NASA in the 1960s.

The next stage of the science competition included public voting, and the Banneker High School students — Mikayla Sharrieff, India Skinner and Bria Snell, all 17-year-old high school juniors — turned to social media to promote their project.

But while the teens were gaining traction on social media and racking up votes, users on 4chan — an anonymous Internet forum where users are known to push hoaxes and spew racist and homophobic comments — were trying to ensure the students wouldn’t win.

The anonymous posters used racial epithets, argued that the students’ project did not deserve to be a finalist and said that the black community was voting for the teens only because of their race. They urged people to vote against the Banneker trio, and one user offered to put the topic on an Internet thread about President Trump to garner more attention. They recommended computer programs that would hack the voting system to give a team of teenage boys a boost.

NASA said in a statement that voting was compromised, prompting it to shut down public voting earlier than expected. The federal space agency said it encourages the use of social media to build support for projects but wrote in a statement Tuesday that public voting was ended because people “attempted to change the vote totals.”

“Unfortunately, it was brought to NASA’s attention yesterday that some members of the public used social media, not to encourage students . . . but to attack a particular student team based on their race and encourage others to disrupt the contest and manipulate the vote, and the attempt to manipulate the vote occurred shortly after those posts,” the NASA statement read.

“NASA continues to support outreach and education for all Americans, and encourages all of our children to reach for the stars.”

The federal agency named eight finalists — including the Banneker group — and said it will announce the winners this month. In addition to the public voting, judges assess the projects to determine the winners, who are invited to NASA’s Goddard Space Flight Center in Greenbelt, Md., for two days of workshops, with the winning team receiving a $4,000 stipend to cover expenses.

Sharrieff, Skinner and Snell did not talk about the controversies tainting the voting but said in interviews Tuesday that they are excited about the positive attention their project has received from classmates, the D.C. community and even strangers on social media.

Prominent black activists and organizations, including one of the leaders of the Women’s March, helped spread the word about the competition, saying that black women are underrepresented in science and that the public should help propel the Banneker students to the top of the competition.

One of Sharrieff’s tweets urging her followers to vote for the project was retweeted more than 2,000 times. And someone even set up an online fundraiser for college scholarships for the teens.

“In the STEM field, we are underrepresented,” Sharrieff said, referring to the widely used acronym for the science, technology, engineering and math fields. “It’s important to be role models for a younger generation who want to be in the STEM field but don’t think they can.”

The NASA competition called on students to find creative ways to use space technology in their everyday lives. The teens said they considered dozens of ideas but settled on a water purification system because they noticed some water fountains in their school could not be used because of potential lead contamination.

They worked at the Inclusive Innovation Incubator — a technology lab focused on diversity and entrepreneurship near Howard University — where they volunteer, and their mentor at the incubator encouraged them to compete and supervised them on weekends as they built a prototype.

The teens purchased two jars, placing meters in each to test the purity of the water. In one jar, the teens place shards of copper in the water — with the copper acting as the experimental contaminant. An electric fan spins the water while filtering floss — a type of fiber — collects contaminated particles. Once clean, the water is transferred by a straw into the second jar. The meters verify that the water is clean, and the teens said they trust their system so much, they drank the water.

The filtration system is based on NASA technology used to develop automatic pool purifiers.

“Ours actually shows you that the water you are drinking is clean,” Snell said.

Sharrieff, Snell and Skinner, who are all on the cheerleading team, said they plan to go to college and pursue careers rooted in science.

Skinner wants to be a pediatric surgeon, Sharrieff aims to be a biomedical engineer, and Snell hopes to be an anesthesiologist.

“The popular norm is sports and modeling and advertising,” Skinner said. “And for people to see our faces, and see we’re just regular girls, and we want to be scientists.”

By Perry Stein/WashingtonPost

Posted by The NON-Conformist

The Era of Fake Video Begins

The digital manipulation of video may make the current era of “fake news” seem quaint.

Edmon de Haro

In a dank corner of the internet, it is possible to find actresses from Game of Thrones or Harry Potter engaged in all manner of sex acts. Or at least to the world the carnal figures look like those actresses, and the faces in the videos are indeed their own. Everything south of the neck, however, belongs to different women. An artificial intelligence has almost seamlessly stitched the familiar visages into pornographic scenes, one face swapped for another. The genre is one of the cruelest, most invasive forms of identity theft invented in the internet era. At the core of the cruelty is the acuity of the technology: A casual observer can’t easily detect the hoax.

This development, which has been the subject of much hand-wringing in the tech press, is the work of a programmer who goes by the nom de hack “deepfakes.” And it is merely a beta version of a much more ambitious project. One of deepfakes’s compatriots told Vice’s Motherboard site in January that he intends to democratize this work. He wants to refine the process, further automating it, which would allow anyone to transpose the disembodied head of a crush or an ex or a co-worker into an extant pornographic clip with just a few simple steps. No technical knowledge would be required. And because academic and commercial labs are developing even more-sophisticated tools for non-pornographic purposes—algorithms that map facial expressions and mimic voices with precision—the sordid fakes will soon acquire even greater verisimilitude.
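
The face-swap method behind these clips has been widely reported to be an autoencoder trained with a single shared encoder and a separate decoder for each identity: encode a frame of person A, decode it with person B’s decoder, and the output keeps A’s pose and expression rendered with B’s face. The sketch below is a minimal toy illustration of that idea, not the actual deepfakes code; the image size, layer widths, and training loop are assumptions chosen for brevity, with random tensors standing in for aligned face crops:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """One encoder shared by both identities; maps a face to a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One decoder per identity; renders that person's face from a code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Random tensors stand in for aligned 64x64 face crops of persons A and B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

# Training: each decoder learns to reconstruct only its own identity from the
# shared latent space, so the encoder must capture pose and expression.
for step in range(100):
    optimizer.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The swap: encode a frame of A, decode with B's decoder. The result keeps
# A's pose and expression but renders them with B's face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```

The shared encoder is the whole trick: because both decoders read from the same latent space, a code extracted from one face can be rendered as the other.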

The internet has always contained the seeds of postmodern hell. Mass manipulation, from clickbait to Russian bots to the addictive trickery that governs Facebook’s News Feed, is the currency of the medium. It has always been a place where identity is terrifyingly slippery, where anonymity breeds coarseness and confusion, where crooks can filch the very contours of selfhood. In this respect, the rise of deepfakes is the culmination of the internet’s history to date—and probably only a low-grade version of what’s to come.

Vladimir Nabokov once wrote that reality is one of the few words that means nothing without quotation marks. He was sardonically making a basic point about relative perceptions: When you and I look at the same object, how do you really know that we see the same thing? Still, institutions (media, government, academia) have helped people coalesce around a consensus—rooted in a faith in reason and empiricism—about how to describe the world, albeit a fragile consensus that has been unraveling in recent years. Social media have helped bring on a new era, enabling individuated encounters with the news that confirm biases and sieve out contravening facts. The current president has further hastened the arrival of a world beyond truth, providing the imprimatur of the highest office to falsehood and conspiracy.

But soon this may seem an age of innocence. We’ll shortly live in a world where our eyes routinely deceive us. Put differently, we’re not so far from the collapse of reality.

We cling to reality today, crave it even. We still very much live in Abraham Zapruder’s world. That is, we venerate the sort of raw footage exemplified by the 8 mm home movie of John F. Kennedy’s assassination that the Dallas clothier captured by happenstance. Unedited video has acquired an outsize authority in our culture. That’s because the public has developed a blinding, irrational cynicism toward reporting and other material that the media have handled and processed—an overreaction to a century of advertising, propaganda, and hyperbolic TV news. The essayist David Shields calls our voraciousness for the unvarnished “reality hunger.”

Scandalous behavior stirs mass outrage most reliably when it is “caught on tape.” Such video has played a decisive role in shaping the past two U.S. presidential elections. In 2012, a bartender at a Florida fund-raiser for Mitt Romney surreptitiously hit record on his camera while the candidate denounced “47 percent” of Americans—Obama supporters all—as enfeebled dependents of the federal government. A strong case can be made that this furtively captured clip doomed his chance of becoming president. The remarks almost certainly would not have registered with such force if they’d merely been scribbled down and written up by a reporter. The video—with its indirect camera angle and clink of ambient cutlery and waiters passing by with folded napkins—was far more potent. All of its trappings testified to its unassailable origins.

Donald Trump, improbably, recovered from the Access Hollywood tape, in which he bragged about sexually assaulting women, but that tape aroused the public’s passions and conscience like nothing else in the 2016 presidential race. Video has likewise provided the proximate trigger for many other recent social conflagrations. It took extended surveillance footage of the NFL running back Ray Rice dragging his unconscious wife from a hotel elevator to elicit a meaningful response to domestic violence from the league, despite a long history of abuse by players. Then there was the 2016 killing of Philando Castile by a Minnesota police officer, streamed to Facebook by his girlfriend. All the reports in the world, no matter the overwhelming statistics and shattering anecdotes, had failed to provoke outrage over police brutality. But the terrifying broadcast of his animalistic demise in his Oldsmobile rumbled the public and led politicians, and even a few hard-line conservative commentators, to finally acknowledge the sort of abuse they had long neglected.

That all takes us to the nub of the problem. It’s natural to trust one’s own senses, to believe what one sees—a hardwired tendency that the coming age of manipulated video will exploit. Consider recent flash points in what the University of Michigan’s Aviv Ovadya calls the “infopocalypse”—and imagine just how much worse they would have been with manipulated video. Take Pizzagate, and then add concocted footage of John Podesta leering at a child, or worse. Falsehoods will suddenly acquire a whole new, explosive emotional intensity.

But the problem isn’t just the proliferation of falsehoods. Fabricated videos will create new and understandable suspicions about everything we watch. Politicians and publicists will exploit those doubts. When captured in a moment of wrongdoing, a culprit will simply declare the visual evidence a malicious concoction. The president, reportedly, has already pioneered this tactic: Even though he initially conceded the authenticity of the Access Hollywood video, he now privately casts doubt on whether the voice on the tape is his own.

In other words, manipulated video will ultimately destroy faith in our strongest remaining tether to the idea of common reality. As Ian Goodfellow, a scientist at Google, told MIT Technology Review, “It’s been a little bit of a fluke, historically, that we’re able to rely on videos as evidence that something really happened.”

The collapse of reality isn’t an unintended consequence of artificial intelligence. It’s long been an objective—or at least a dalliance—of some of technology’s most storied architects. In many ways, Silicon Valley’s narrative begins in the early 1960s with the International Foundation for Advanced Study, not far from the legendary engineering labs clumped around Stanford. The foundation specialized in experiments with LSD. Some of the techies working in the neighborhood couldn’t resist taking a mind-bending trip themselves, undoubtedly in the name of science. These developers wanted to create machines that could transform consciousness in much the same way that drugs did. Computers would also rip a hole in reality, leading humanity away from the quotidian, gray-flannel banality of Leave It to Beaver America and toward a far groovier, more holistic state of mind. Steve Jobs described LSD as “one of the two or three most important” experiences of his life.

Fake-but-realistic video clips are not the end point of the flight from reality that technologists would have us take. The apotheosis of this vision is virtual reality. VR’s fundamental purpose is to create a comprehensive illusion of being in another place. With its goggles and gloves, it sets out to trick our senses and subvert our perceptions. Video games began the process of transporting players into an alternate world, injecting them into another narrative. But while games can be quite addictive, they aren’t yet fully immersive. VR has the potential to more completely transport—we will see what our avatars see and feel what they feel. Several decades ago, after giving the nascent technology a try, the psychedelic pamphleteer Timothy Leary reportedly called it “the new LSD.”

Life could be more interesting in virtual realities as the technology emerges from its infancy; the possibilities for creation might be extended and enhanced in wondrous ways. But if the hype around VR eventually pans out, then, like the personal computer or social media, it will grow into a massive industry, intent on addicting consumers for the sake of its own profit, and possibly dominated by just one or two exceptionally powerful companies. (Facebook’s investment in VR, via its purchase of the start-up Oculus, is hardly reassuring.)

The ability to manipulate consumers will grow because VR definitionally creates confusion about what is real. Designers of VR have described some consumers as having such strong emotional responses to a terrifying experience that they rip off those chunky goggles to escape. Studies have already shown how VR can be used to influence the behavior of users after they return to the physical world, making them either more or less inclined to altruistic behaviors.

Researchers in Germany who have attempted to codify ethics for VR have warned that its “comprehensive character” introduces “opportunities for new and especially powerful forms of both mental and behavioral manipulation, especially when commercial, political, religious, or governmental interests are behind the creation and maintenance of the virtual worlds.” As the VR pioneer Jaron Lanier writes in his recently published memoir, “Never has a medium been so potent for beauty and so vulnerable to creepiness. Virtual reality will test us. It will amplify our character more than other media ever have.”

Perhaps society will find ways to cope with these changes. Maybe we’ll learn the skepticism required to navigate them. Thus far, however, human beings have displayed a near-infinite susceptibility to getting duped and conned—falling easily into worlds congenial to their own beliefs or self-image, regardless of how eccentric or flat-out wrong those beliefs may be. Governments have been slow to respond to the social challenges that new technologies create, and might rather avoid this one. The question of deciding what constitutes reality isn’t just epistemological; it is political and would involve declaring certain deeply held beliefs specious.

Few individuals will have the time or perhaps the capacity to sort elaborate fabulation from truth. Our best hope may be outsourcing the problem, restoring cultural authority to trusted validators with training and knowledge: newspapers, universities. Perhaps big technology companies will understand this crisis and assume this role, too. Since they control the most-important access points to news and information, they could most easily squash manipulated videos, for instance. But to play this role, they would have to accept certain responsibilities that they have so far largely resisted.

In 2016, as Russia used Facebook to influence the American presidential election, Elon Musk confessed his understanding of human life. He talked about a theory, derived from an Oxford philosopher, that is fashionable in his milieu. The idea holds that we’re actually living in a computer simulation, as if we’re already characters in a science-fiction movie or a video game. He told a conference, “The odds that we’re in ‘base reality’ is one in billions.” If the leaders of the industry that presides over our information and hopes to shape our future can’t even concede the existence of reality, then we have little hope of salvaging it.

By Franklin Foer/TheAtlantic

Posted by The NON-Conformist

His 2020 Campaign Message: The Robots Are Coming

Andrew Yang, a New York businessman who wants to be the Democrats’ next presidential candidate, believes that automation threatens to bring Great Depression-level unemployment and violent unrest. Credit Guerin Blask for The New York Times

Among the many, many Democrats who will seek the party’s presidential nomination in 2020, most probably agree on a handful of core issues: protecting DACA, rejoining the Paris climate agreement, unraveling President Trump’s tax breaks for the wealthy.

Only one of them will be focused on the robot apocalypse.

That candidate is Andrew Yang, a well-connected New York businessman who is mounting a longer-than-long-shot bid for the White House. Mr. Yang, a former tech executive who started the nonprofit organization Venture for America, believes that automation and advanced artificial intelligence will soon make millions of jobs obsolete — yours, mine, those of our accountants and radiologists and grocery store cashiers. He says America needs to take radical steps to prevent Great Depression-level unemployment and a total societal meltdown, including handing out trillions of dollars in cash.

“All you need is self-driving cars to destabilize society,” Mr. Yang, 43, said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, “we’re going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.”

“That one innovation,” he continued, “will be enough to create riots in the street. And we’re about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.”

Alarmist? Sure. But Mr. Yang’s doomsday prophecy echoes the concerns of a growing number of labor economists and tech experts who are worried about the coming economic consequences of automation. A 2017 report by McKinsey & Company, the consulting firm, concluded that by 2030 — three presidential terms from now — as many as one-third of American jobs may disappear because of automation. (Other studies have given cheerier forecasts, predicting that new jobs will replace most of the lost ones.)

Mr. Yang has proposed monthly payments of $1,000 for every American from age 18 to 64. “I’m a capitalist,” he said, “and I believe that universal basic income is necessary for capitalism to continue.” Credit Guerin Blask for The New York Times

Perhaps it was inevitable that a tech-skeptic candidate would try to seize the moment. Scrutiny of tech companies like Facebook and Google has increased in recent years, and worries about monopolistic behavior, malicious exploitation of social media and the addictive effects of smartphones have made a once-bulletproof industry politically vulnerable. Even industry insiders have begun to join the backlash.

To fend off the coming robots, Mr. Yang is pushing what he calls a “Freedom Dividend,” a monthly check for $1,000 that would be sent to every American from age 18 to 64, regardless of income or employment status. These payments, he says, would bring everyone in America up to approximately the poverty line, even if they were directly hit by automation. Medicare and Medicaid would be unaffected under Mr. Yang’s plan, but people receiving government benefits such as the Supplemental Nutrition Assistance Program could choose to continue receiving those benefits, or take the $1,000 monthly payments instead.

The Freedom Dividend isn’t a new idea. It’s a rebranding of universal basic income, a policy that has been popular in academic and think-tank circles for decades, was favored by the Rev. Dr. Martin Luther King Jr. and the economist Milton Friedman, and has more recently caught the eye of Silicon Valley technologists. Elon Musk, Mark Zuckerberg and the venture capitalist Marc Andreessen have all expressed support for the idea of a universal basic income. Y Combinator, the influential start-up incubator, is running a basic income experiment with 3,000 participants in two states.

Despite its popularity among left-leaning academics and executives, universal basic income is still a leaderless movement that has yet to break into mainstream politics. Mr. Yang thinks he can sell the idea in Washington by framing it as a pro-business policy.

“I’m a capitalist,” he said, “and I believe that universal basic income is necessary for capitalism to continue.”

Mr. Yang, a married father of two boys, is a fast-talking extrovert who wears the nu-executive uniform of a blazer and jeans without a tie. He keeps a daily journal of things he’s grateful for, and peppers conversations with business-world catchphrases like “core competency.” After graduating from Brown University and Columbia Law School, he quit his job at a big law firm and began working in tech. He ran an internet start-up that failed during the first dot-com bust, worked as an executive at a health care start-up and helped build a test-prep business that was acquired by Kaplan in 2009, netting him a modest fortune.

He caught the political bug after starting Venture for America, an organization modeled after Teach for America that connects recent college graduates with start-up businesses. During his travels to Midwestern cities, he began to connect the growth of anti-establishment populism with the rise of workplace automation.

“The reason Donald Trump was elected was that we automated away four million manufacturing jobs in Michigan, Ohio, Pennsylvania and Wisconsin,” he said. “If you look at the voter data, it shows that the higher the level of concentration of manufacturing robots in a district, the more that district voted for Trump.”

Mr. Yang’s skepticism of technology extends beyond factory robots. In his campaign book, “The War on Normal People,” he writes that he wants to establish a Department of the Attention Economy in order to regulate social media companies like Facebook and Twitter. He also proposes appointing a cabinet-level secretary of technology, based in Silicon Valley, to study the effects of emerging technologies.

Critics may dismiss Mr. Yang’s campaign (slogan: “Humanity First”) as a futurist vanity stunt. The Democratic pipeline is already stuffed with would-be 2020 contenders, most of whom already have the public profile and political experience that Mr. Yang lacks — and at least one of whom, Senator Bernie Sanders, has already hinted at support for a universal basic income.

Opponents of universal basic income have also pointed to its steep price tag — an annual outlay of $12,000 per American adult would cost approximately $2 trillion, equivalent to roughly half of the current federal budget — and the possibility that giving out free money could encourage people not to work. These reasons, among others, are why Hillary Clinton, who considered adding universal basic income to her 2016 platform, concluded it was “exciting but not realistic.”
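
For scale, here is a minimal back-of-the-envelope sketch of the arithmetic behind that price tag. The eligible-population figure is an illustrative assumption rather than a number from the article; Census estimates put the US population aged 18 to 64 near 200 million in 2018:

```python
# Back-of-the-envelope cost of a $1,000-a-month universal basic income.
ADULTS_18_TO_64 = 200_000_000  # assumed eligible population (illustrative)
MONTHLY_PAYMENT = 1_000        # Mr. Yang's proposed "Freedom Dividend"

annual_outlay = ADULTS_18_TO_64 * MONTHLY_PAYMENT * 12
print(f"Annual outlay: ${annual_outlay / 1e12:.1f} trillion")
# Prints "Annual outlay: $2.4 trillion", the same order of magnitude as the
# roughly $2 trillion critics cite; a narrower eligible population or
# benefit offsets would pull the total down toward that figure.
```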

“In our political culture, there are formidable political obstacles to providing cash to working-age people who aren’t employed, and it’s unlikely that U.B.I. could surmount them,” Robert Greenstein, the president of the Center on Budget and Policy Priorities, a Washington research group, wrote last year.

But Mr. Yang thinks he can make the case. He has proposed paying for a basic income with a value-added tax, a consumption-based levy that he says would raise money from companies that profit from automation. A recent study by the Roosevelt Institute, a left-leaning policy think tank, suggested that such a plan, paid for by a progressive tax, could grow the economy by more than 2 percent and provide jobs for 1.1 million more people.

“Universal basic income is an old idea,” Mr. Yang said, “but it’s an old idea that right now is uniquely relevant because of what we’re experiencing in society.”

Mr. Yang’s prominent supporters include Andy Stern, a former leader of the Service Employees International Union, who credited him with “opening up a discussion that the country’s afraid to have.” His campaign has also attracted some of Silicon Valley’s elites. Tony Hsieh, the chief executive of Zappos, is an early donor to Mr. Yang’s campaign, as are several venture capitalists and high-ranking alumni of Facebook and Google.

Mr. Yang, who has raised roughly $130,000 since filing his official paperwork with the Federal Election Commission in November, says he will ultimately raise millions from supporters in the tech industry and elsewhere to supplement his own money.

Mr. Yang has other radical ideas, too. He wants to appoint a White House psychologist, “make taxes fun” by turning April 15 into a national holiday and put into effect “digital social credits,” a kind of gamified reward system to encourage socially productive behavior. To stem corruption, he suggests increasing the president’s salary to $4 million from its current $400,000, and sharply raising the pay of other federal regulators, while barring them from accepting paid speaking gigs or lucrative private-sector jobs after leaving office.

And although he said he was socially liberal, he admitted that he hadn’t fully developed all his positions. (On most social issues, Mr. Yang said, “I believe what you probably think I believe.”)

The likelihood, of course, is that Mr. Yang’s candidacy won’t end with a parade down Pennsylvania Avenue. Still, experts I spoke with were glad to have him talking about the long-term risks of automation, at a time when much of Washington is consumed with the immediate and visible.

Erik Brynjolfsson, the director of M.I.T.’s Initiative on the Digital Economy and a co-author of “The Second Machine Age,” praised Mr. Yang for bringing automation’s economic effects into the conversation.

“This is a serious problem, and it’s going to get a lot worse,” Mr. Brynjolfsson said. “In every election for the next 10 or 20 years, this will become a more salient issue, and the candidates who can speak to it effectively will do well.”

Mr. Yang knows he could sound the automation alarm without running for president. But he feels a sense of urgency. In his view, there’s no time to mess around with think-tank papers and “super PACs,” because the clock is ticking.

“We have five to 10 years before truckers lose their jobs,” he said, “and all hell breaks loose.”

By Kevin Roose/NYTimes

Posted by The NON-Conformist

Meet The Young Robotics Entrepreneur Who Got A Dream Deal With Apple

Silas Adekunle is the co-founder and CEO of Reach Robotics. (Photo by Parmy Olson)

Behind the bar of San Francisco’s Four Seasons Hotel last March, a set of doors led into an empty cafe room where an unknown, 25-year-old British-Nigerian entrepreneur named Silas Adekunle was about to meet a senior executive at Apple.

If he was nervous, Adekunle didn’t show it.

He smiled and opened up a large suitcase. It was filled with colorful robots that looked at first like toys. When he took one out and set it on the floor, it came alive.

Called a Mekamon, it raced, spider-like, across a mat on four pointed legs, trotting daintily before bowing and performing a dramatic death shudder. Adekunle took out his phone and pointed it at the Mekamon, and on his screen it was now surrounded by glowing lights, facing an animated opponent it could shoot lasers at.

Apple’s head of developer relations, Ron Okamoto, carefully surveyed the other robots, then peppered Adekunle with questions about motors and articulations. “It’s got character,” he noted.

Their expected 15-minute chat went on for more than an hour. At the end, Okamoto said the words every young entrepreneur with a team of just nine staff dreams of hearing: “You need to come spend some time with us in Cupertino.”

A year on, Adekunle, who at 26 is part of Forbes’ latest 30 Under 30 list for European Technology (launched this week), is on course to sell plenty of Mekamon robots thanks to an exclusive distribution deal he signed in November 2017 after “spending some time in Cupertino” and meeting Apple’s retail executives.

Impressed by the quality of his robots and their ability to show emotion with subtly calibrated movements, Apple priced his four-legged “battle-bots” at $300 and has put them in nearly all of its stores in the United States and Britain. Early customers skew towards male techies, but a growing number of parents are buying the robots for their children to get them interested in STEM, Adekunle says.

What he hadn’t known during that first meeting at the Four Seasons: Apple was about to launch ARKit, its very first platform for augmented reality. AR is the cutting-edge technology that mixes digital animations with the real world on smartphone screens, popularized by Pokemon Go and expected to go further with the face and object tracking technology in Apple’s latest iPhone X. With no track record, Adekunle and his team of nine were suddenly working with the world’s biggest brand.

Robots might already be changing the nature of warehouse management for retailers and other industries, but they’re gradually making their way into our homes too, helped along by people’s increasing ease with artificial entities like Amazon’s Alexa. Adekunle hopes his anthropomorphised robot-spiders will be as popular, say, as iRobot’s Roomba vacuum cleaner.

Trends point that way. Analysts at IDC predict that in three years, the market for consumer robots will have doubled. Next generation AI robots will be focused less on physical tasks, and more on teaching and interacting with family members, says Jing Bing Zhang, the research director of IDC Worldwide Robotics.

The MekaMon Delta Unit

On a more recent Monday morning in January 2018, Adekunle is standing in the corridor of Reach Robotics in Bristol, UK, and staring at a hunk of wires and melted plastic. It’s the first prototype he made in his college dorm room, and is on display as a reminder of how far his startup has come.

“Some of my fingerprints are still on there,” he says. “It’s ugly as hell.”

The Mekamon today looks a lot more cutting edge, something like a cross between a crab and a spider, but unlike either of those animals, it has no features resembling a pair of eyes or mouth.

There’s a reason for that.

“When I went into robotics I really loved motion,” says Adekunle. “People are used to clunky robots, and when you make it appear to be realistic, people either love it or they’re freaked out.”

Playing an augmented reality game with the MekaMon Mekacademy unit.

Adekunle decided that his robots could use motion to get an emotional reaction from humans. “I love motion,” he says. Back when he was first tinkering with a robotic prototype at the University of the West of England, around 2012, several robotic pets like the Sony Aibo robot dog were already hitting the market. The trouble was that most of these personified gadgets cost a fortune. Sony’s latest version of the Aibo costs $1,700 plus a subscription fee.

More from Parmy Olson/Forbes

Posted by The NON-Conformist

Here’s one of the people to thank for your GPS

In a Jan. 19, 2018 photo, Gladys West and her husband Ira West stand in their home in King George, Va. West was part of the team that developed the Global Positioning System in the 1950s and 1960s. (Mike Morones/The Free Lance-Star via AP)

Gladys West was putting together a short bio about herself for a sorority function that recognized senior members of the group.

She noted her 42-year career at the Navy base at Dahlgren and devoted one short-and-sweet line to the fact she was part of the team that developed the Global Positioning System in the 1950s and 1960s.

Fellow Alpha Kappa Alpha Sorority member Gwen James was blown away by the statement. The two had known each other for more than 15 years, and James had no idea that the soft-spoken and sharp-minded West played such a “pivotal role” in a technology that’s become a household word.

“GPS has changed the lives of everyone forever,” James said. “There is not a segment of this global society — military, auto industry, cell phone industry, social media, parents, NASA, etc. — that does not utilize the Global Positioning System.”

The revelation that her 87-year-old sorority sister was one of the “Hidden Figures” behind GPS motivated James to share it with the world.

“I think her story is amazing,” James added.

West, who lives in King George County, admits she had no idea at the time — when she was recording satellite locations and doing accompanying calculations — that her work would affect so many. “When you’re working every day, you’re not thinking, ‘What impact is this going to have on the world?’ You’re thinking, ‘I’ve got to get this right.’ ”

And get it right she did, according to those who worked with her or heard about her.

In a 2017 message about Black History Month, Capt. Godfrey Weekes, then-commanding officer at the Naval Surface Warfare Center Dahlgren Division, described the “integral role” played by West.

“She rose through the ranks, worked on the satellite geodesy (science that measures the size and shape of Earth) and contributed to the accuracy of GPS and the measurement of satellite data,” he wrote. “As Gladys West started her career as a mathematician at Dahlgren in 1956, she likely had no idea that her work would impact the world for decades to come.”

As a girl growing up in Dinwiddie County south of Richmond, all Gladys Mae Brown knew was that she didn’t want to work in the fields, picking tobacco, corn and cotton, or in a nearby factory, beating tobacco leaves into pieces small enough for cigarettes and pipes, as her parents did.

“I realized I had to get an education to get out,” she said.

When she learned that the valedictorian and salutatorian from her high school would earn a scholarship to Virginia State College (now University), she studied hard and graduated at the top of her class.

She got her free ticket to college, majored in math and taught two years in Sussex County before she went back to school for her master’s degree.

She sought jobs where she could apply her skills and eventually got a call from the Dahlgren base, then known as the Naval Proving Ground and now called Naval Support Facility Dahlgren.

“That’s when life really started,” she said.

She began her career in 1956, the second black woman hired at the base and one of only four black employees. One was a mathematician named Ira West, and the two dated for 18 months before they married in 1957.

“That was a great time to be at the base,” he said. “They were just discovering computers.”

While he spent most of his career developing computer programs for ballistic missiles launched from submarines, her calculations eventually led to satellites.

She collected information from the orbiting machines, focusing on information that helped to determine their exact location as they transmitted from around the world. Data was entered into large-scale “supercomputers” that filled entire rooms, and she worked on computer software that processed geoid heights, or precise surface elevations.

The process that led to GPS is too scientific for a newspaper story, but Gladys West would say it took a lot of work — equations checked and double-checked, along with lots of data collection and analysis. Although she might not have grasped its future usage, she was pleased by the company she kept.

“I was ecstatic,” she said. “I was able to come from Dinwiddie County and be able to work with some of the greatest scientists working on these projects.”

Several times during a recent interview and in written notes made over the years, Gladys West referred to staying true to herself and how she was raised. She knew the data she entered had to be right, and she worked until she was certain of its accuracy.

Ralph Neiman, her department head in 1979, acknowledged those skills in a commendation he recommended for West, project manager for the Seasat radar altimetry project. Launched in 1978, Seasat was the first satellite designed for remote sensing of oceans with synthetic aperture radar.

“This involved planning and executing several highly complex computer algorithms which have to analyze an enormous amount of data,” Neiman wrote. “You have used your knowledge of computer applications to accomplish this in an efficient and timely manner.”

He also commended the many hours she dedicated to the project, beyond the normal work week, and the fact that it had cut the processing time in half, saving the base many thousands of dollars.

Dr. Jim Colvard was technical director — the top civilian position at NSWC Dahlgren — from 1973 to 1980 and knew West as one of his students in a graduate program and as a professional employee.

“She was an excellent student and a respected and productive professional,” he wrote in an email. “Her competence, not her color, defined her.”

West retired from the base in 1998, a year after her husband, and the two celebrated by traveling to New Zealand and Australia.

She was excited about the new stage of her life and all the things she might get into. She’d been taking one course at a time toward her Ph.D. from Virginia Tech and was ready for the last step, to write her dissertation.

“However, the Almighty apparently had other plans for me,” she said.

Five months after retirement, West had a stroke that impaired her hearing and vision, balance and use of her right side. She was feeling pretty sorry for herself until “all of a sudden, these words came into my head: ‘You can’t stay in the bed, you’ve got to get up from here and get your Ph.D.’ ”

West did just that.

She and her husband started taking classes at the King George YMCA to rebuild her strength and recover the mobility she’d lost in the stroke. She had to have a quadruple bypass later, then dealt with breast cancer in 2011.

The two continue to attend five exercise classes a week, and both are going strong. He ran a half-marathon six years ago, at age 80, and she’s in the midst of writing her memoirs.

“Gladys and Ira are two of the finest people I’ve ever known,” said Cindy Miller, a King George resident and former technical writer at Dahlgren. “They’re just good, solid-to-the-core, God-fearing people.”

As for the GPS, the Wests use it when they travel, although she still prefers to refer to a paper map. That perplexes Carolyn Oglesby, the couple’s oldest daughter. The Wests have three children and seven grandchildren.

“I asked her why she didn’t just use the Garmin (GPS) since she knows the equations that she helped write are correct,” Oglesby said. “She says the data points could be wrong or outdated so she has to have that map.”

Gladys West is still doing her own calculations.

From NavyTimes

Posted by The NON-Conformist

Tackling the Internet’s Central Villain: The Advertising Business

Pretend you are the lead detective on a hit new show, “CSI: Terrible Stuff on the Internet.” In the first episode, you set up one of those crazy walls plastered with headlines and headshots, looking for hidden connections between everything awful that’s been happening online recently.

There’s a lot of dark stuff. In one corner, you have the Russian campaign to influence the 2016 presidential election with digital propaganda. In another, a rash of repugnant videos on YouTube, with children being mock-abused, cartoon characters bizarrely committing suicide on the kids’ channel, and a popular vlogger recording a body hanging from a tree.

Then there’s tech “addiction,” the rising worry that adults and kids are getting hooked on smartphones and social networks despite our best efforts to resist the constant desire for a fix. And all over the internet, general fakery abounds — there are millions of fake followers on Twitter and Facebook, fake rehab centers being touted on Google, and even fake review sites to sell you a mattress.

So who is the central villain in this story, the driving force behind much of the chaos and disrepute online?

This isn’t that hard. You don’t need a crazy wall to figure it out, because the force to blame has been quietly shaping the contours of life online since just about the beginning of life online: It’s the advertising business, stupid.

Ads are the lifeblood of the internet, the source of funding for just about everything you read, watch and hear online. The digital ad business is in many ways a miracle machine — it corrals and transforms latent attention into real money that pays for many truly useful inventions, from search to instant translation to video hosting to global mapping.

But the online ad machine is also a vast, opaque and dizzyingly complex contraption with underappreciated capacity for misuse — one that collects and constantly profiles data about our behavior, creates incentives to monetize our most private desires, and frequently unleashes loopholes that the shadiest of people are only too happy to exploit.

And for all its power, the digital ad business has long been under-regulated and under-policed, both by the companies who run it and by the world’s governments. In the United States, the industry has been almost untouched by oversight, even though it forms the primary revenue stream of two of the planet’s most valuable companies, Google and Facebook.

“In the early days of online media, the choice was essentially made — give it away for free, and advertising would produce the revenue,” said Randall Rothenberg, the chief executive of the Interactive Advertising Bureau, a trade association that represents companies in the digital ad business. “A lot of the things we see now flow out from that decision.”

Mr. Rothenberg’s organization has long pushed for stronger standards for online advertising. In a speech last year, he implored the industry to “take civic responsibility for our effect on the world.” But he conceded the business was growing and changing too quickly for many to comprehend its excesses and externalities — let alone to fix them.

“Technology has largely been outpacing the ability of individual companies to understand what is actually going on,” he said. “There’s really an unregulated stock market effect to the whole thing.”

Facebook, which reports its earnings on Wednesday, said its advertising principles hold that ads should “be safe and civil,” and it pointed to several steps it has taken to achieve that goal. “We’ve tightened our ad policies, hired more ad reviewers, and created a new team to help detect and prevent abuses,” said Rob Goldman, the company’s vice president of advertising. “We’re also testing a tool that will bring more transparency to ads running on our platform. We know there is more work to do, but our goal is to keep people safe.”

A spokesman for Google, whose parent company, Alphabet, reports earnings on Thursday, said that it is constantly policing its ad system, pointing out recent steps it has taken to address problems arising from the ad business, including several changes to YouTube.

The role of the ad business in much of what’s terrible online was highlighted in a recent report by two think tanks, New America and Harvard’s Shorenstein Center on Media, Politics and Public Policy.

“The central problem of disinformation corrupting American political culture is not Russian spies or a particular social media platform,” two researchers, Dipayan Ghosh and Ben Scott, wrote in the report, titled “Digital Deceit.” “The central problem is that the entire industry is built to leverage sophisticated technology to aggregate user attention and sell advertising.”

The report chronicles just how efficient the online ad business has become at profiling, targeting, and persuading people. That’s good news for the companies that want to market to you — as the online ad machine gets better, marketing gets more efficient and effective, letting companies understand and influence consumer sentiment at a huge scale for little money.

But the same cheap and effective persuasion machine is also available to anyone with nefarious ends. The Internet Research Agency, the troll group at the center of Russian efforts to influence American politics, spent $46,000 on Facebook ads before the 2016 election. That’s not very much — Hillary Clinton and Donald J. Trump’s campaigns spent tens of millions online. And yet the Russian campaign seems to have had enormous reach; Facebook has said the I.R.A.’s messages — both its ads and its unpaid posts — were seen by nearly 150 million Americans.

How the I.R.A. achieved this mass reach likely has something to do with the dynamics of the ad business, which lets advertisers run many experimental posts to hone their messages, and tends to reward posts that spark outrage and engagement — exactly the sort that the Russians were pushing.

A sample of a Facebook ad that ran around the time of the 2016 presidential election that was eventually linked back to Russian agents.

“You can’t have it both ways,” Mr. Scott said. “Either you have a brilliant technology that permits microtargeting to exactly the people you want to influence at exactly the right time with exactly the right message — or you’re only reaching a small number of people and therefore it couldn’t be influential.”

The consequences of the ad business don’t end at foreign propaganda. Consider all the nutty content recently found on YouTube Kids — not just the child-exploitation clips but also videos that seem to be created in whole or in part by algorithms that are mining the system for what’s popular, then creating endless variations.

Why would anyone do such a thing? For the ad money. One producer of videos that show antics including his children being scared by clowns told BuzzFeed that he had made more than $100,000 in two months from ads on his videos.

YouTube, which is owned by Google, has since pulled down thousands of such disturbing videos; the company said late last year that it’s hiring numerous moderators to police the platform. It also tightened the rules for which producers can make money from its ad system.

Facebook, too, has made several recent fixes. The company has built a new tool — currently being tested in Canada and slated to be rolled out more widely this year — that lets people see the different ads being placed by political pages, a move meant to address I.R.A.-like influence campaigns. It has also fixed holes that allowed advertisers to target campaigns by race and religion. And it recently unveiled a new version of its News Feed that is meant to cut down on passively scrolling through posts — part of Mark Zuckerberg’s professed effort to improve the network even, he has said, at the cost of advertising revenue.

The tinkering continued on Tuesday, when Facebook also said it would ban ads promoting cryptocurrency schemes, some of which have fallen into scammy territory.

Yet these are all piecemeal efforts. They don’t address the underlying logic of the ad business, which produces endless incentives for gaming the system in ways that Google and Facebook often only discover after the fact. Mr. Rothenberg said this is how regulating advertising is likely to go — a lot of fixes resembling “whack-a-mole.”

Of course, there is the government. You could imagine some regulator imposing stricter standards for who has access to the online ad system, who makes money from it, how it uses private information, and how transparent tech companies must be about it all. But that also seems unlikely; the Honest Ads Act, a proposal to regulate online political ads, has gone nowhere in Congress.

One final note: In 2015, Timothy D. Cook, Apple’s chief executive, warned about the dangers of the online ad business, especially its inherent threat to privacy. I wrote a column in which I took Mr. Cook to task — I argued that he had not acknowledged how ad-supported services improved his own company’s devices.

I stand by that view, but now I also regret dismissing his warning so cavalierly. Socially, politically and culturally, the online ad business is far more dangerous than I appreciated. Mr. Cook was right, and we should have listened to him.

By Farhad Manjoo/NYTimes

Posted by The NON-Conformist
