‘Fog computing’ could improve communications during natural disasters

Hurricane Irma at peak intensity near the U.S. Virgin Islands on September 6, 2017 (credit: NOAA)

Researchers at the Georgia Institute of Technology have developed a system that uses edge computing (also known as fog computing) to deal with the loss of internet access in natural disasters such as hurricanes, tornados, and floods.

The idea is to create an ad hoc decentralized network that uses computing power built into mobile phones, routers, and other hardware to provide actionable data to emergency managers and first responders.

In a flooded area, for example, search and rescue personnel could continuously ping enabled phones, surveillance cameras, and “internet of things” devices in an area to determine their exact locations. That data could then be used to create density maps of people to prioritize and guide emergency response teams.
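As a rough illustration of how such a density map might be built, here is a minimal Python sketch that bins device location fixes into grid cells and ranks the densest cells first. The binning scheme and cell size are our assumptions for illustration, not details from the Georgia Tech system.

```python
from collections import Counter

def density_map(device_locations, cell_size=0.001):
    """Bin device (lat, lon) fixes into grid cells to approximate
    population density. Purely illustrative: the paper does not
    specify this algorithm."""
    grid = Counter()
    for lat, lon in device_locations:
        cell = (round(lat / cell_size), round(lon / cell_size))
        grid[cell] += 1
    return grid

# Example: three phones pinged in a flooded neighborhood
pings = [(33.7490, -84.3880), (33.7491, -84.3881), (33.7750, -84.3963)]
hotspots = sorted(density_map(pings).items(), key=lambda kv: -kv[1])
print(hotspots[0])  # densest cell first, to prioritize response teams
```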

Situational awareness for first responders

“We believe fog computing can become a potent enabler of decentralized, local social sensing services that can operate when internet connectivity is constrained,” said Kishore Ramachandran, PhD, computer science professor at Georgia Tech and senior author of a paper presented in April this year at the 2nd International Workshop on Social Sensing*.

“This capability will provide first responders and others with the level of situational awareness they need to make effective decisions in emergency situations.”

The team has proposed a generic software architecture for social sensing applications that is capable of exploiting the fog-enabled devices. The design has three components: a central management function that resides in the cloud, a data processing element placed in the fog infrastructure, and a sensing component on the user’s device.
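A minimal sketch of that three-tier split, assuming nothing beyond the division of labor described above (the class names and interfaces are our own illustration, not taken from the paper):

```python
class DeviceSensor:
    """Runs on the user's phone; produces raw sensor readings."""
    def read(self):
        return {"id": "phone-42", "lat": 33.749, "lon": -84.388}

class FogNode:
    """Runs on a router or other local hardware; filters and
    aggregates readings so the system keeps working offline."""
    def __init__(self):
        self.buffer = []
    def ingest(self, reading):
        self.buffer.append(reading)
    def summarize(self):
        return {"count": len(self.buffer), "readings": self.buffer}

class CloudManager:
    """Central management; receives fog summaries when a
    connection is available."""
    def sync(self, summary):
        print("cloud received:", summary["count"], "readings")

sensor, fog, cloud = DeviceSensor(), FogNode(), CloudManager()
fog.ingest(sensor.read())
cloud.sync(fog.summarize())  # only runs when connectivity returns
```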

Beyond emergency response during natural disasters, the team believes its proposed fog architecture can also benefit communities with limited or no internet access — for public transportation management, job recruitment, and housing, for example.

To monitor far-flung devices in areas with no internet access, a bus or other vehicle could be outfitted with fog-enabled sensing capabilities, the team suggests. As it travels in remote areas, it would collect data from sensing devices. Once in range of internet connectivity, the “data mule” bus would upload that information to centralized cloud-based platforms.
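The “data mule” pattern amounts to store-and-forward: buffer readings while riding through areas with no coverage, then drain the buffer once back in range. A toy sketch, with all names hypothetical:

```python
import queue

class DataMule:
    """Sketch of the 'data mule' pattern: collect readings while
    offline, flush them to the cloud when connectivity appears.
    Illustrative only; not from the paper."""
    def __init__(self):
        self.store = queue.Queue()
    def collect(self, reading):
        self.store.put(reading)       # riding through a remote area
    def flush(self, upload):
        while not self.store.empty():
            upload(self.store.get())  # back in coverage: drain queue

mule = DataMule()
mule.collect({"sensor": "river-gauge-7", "level_m": 2.4})
mule.flush(lambda r: print("uploaded:", r))
```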

* “Social sensing has emerged as a new paradigm for collecting sensory measurements by means of ‘crowd-sourcing’ sensory data collection tasks to a human population. Humans can act as sensor carriers (e.g., carrying GPS devices that share location data), sensor operators (e.g., taking pictures with smart phones), or as sensors themselves (e.g., sharing their observations on Twitter). The proliferation of sensors in the possession of the average individual, together with the popularity of social networks that allow massive information dissemination, heralds an era of social sensing that brings about new research challenges and opportunities in this emerging field.” — SocialSens2017

Facebook’s internet-beaming drone completes first test flight

(credit: Facebook)

Facebook Connectivity Lab announced today the first full-scale test flight of Aquila — a solar-powered unmanned airplane/drone designed to bring affordable internet access to some of the 1.6 billion people living in remote locations with no access to mobile broadband networks.

When complete, Aquila will be able to circle a region up to 60 miles in diameter for up to 90 days at a time, beaming internet connectivity down from an altitude of more than 60,000 feet. It will be part of a future fleet of drones.

Facebook’s Secret Conversations

(credit: Facebook)

Facebook began today (Friday, July 8) rolling out a new beta-version feature for Messenger called “Secret Conversations,” allowing for “one-to-one secret conversations … that will be end-to-end encrypted and which can only be read on one device of the person you’re communicating with.”

Facebook suggests the feature will be useful for discussing an illness or sending financial information (as in the pictures above). You can choose to set a timer to control how long each message you send remains visible within the conversation. (Rich content such as GIFs and videos, as well as payments, is not supported.)
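Conceptually, a disappearing-message timer is just a visibility check against the send time. A toy model, not Facebook’s implementation:

```python
import time

def is_visible(sent_at, ttl_seconds, now=None):
    """Toy model of a disappearing-message timer: a message stays
    visible only until its timer expires. (Illustration only; not
    Facebook's implementation.)"""
    now = time.time() if now is None else now
    return now < sent_at + ttl_seconds

sent = time.time()
print(is_visible(sent, ttl_seconds=60))     # True: within the window
print(is_visible(sent, 60, now=sent + 61))  # False: timer expired
```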

The technology, described in a technical whitepaper (open access), is based on the Signal Protocol developed by Open Whisper Systems, which is also used in Open Whisper Systems’ own Signal messaging app (Chrome, iOS, Android), WhatsApp, and Google’s Allo (not yet launched).
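For flavor, here is a toy sketch of the symmetric-key ratchet at the heart of the Signal Protocol: each message key is derived from a chain key with HMAC, and the chain key is then advanced and the old one discarded, so a device compromised later cannot decrypt earlier messages. This is a simplified fragment only; the full protocol also involves X3DH key agreement and a Diffie-Hellman ratchet.

```python
import hmac, hashlib

def ratchet(chain_key):
    """Toy version of Signal's symmetric-key ratchet: derive a
    one-time message key and the next chain key from the current
    chain key, then discard the old one (forward secrecy)."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

# Assume a shared secret already established by key agreement
ck = hashlib.sha256(b"shared secret from key agreement").digest()
for i in range(3):
    mk, ck = ratchet(ck)  # each message gets a fresh key
    print(i, mk.hex()[:16])
```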

Unlike WhatsApp and iMessage, which automatically encrypt every message, Secret Conversations works only from a single device and is opt-in, which “will likely rankle many privacy advocates,” says Wired.

But not as much as all of these encrypted services rankle law enforcement agencies, Wired adds, since such features hamper surveillance capabilities.


How to bring the entire web to VR

Google is working on new features to bring the web to VR, according to Google happiness evangelist François Beaufort.

To help web developers embed VR content in their web pages, the Google Chromium team has been working towards WebVR support in Chromium (programmers: see Chromium Code Reviews), Beaufort said. That means you can now use Cardboard- or Daydream-ready VR viewers to see pages with compliant VR content while browsing the web with Chrome.

(credit: Google)

“The team is just getting started on making the web work well for VR so stay tuned, there’s more to come!” he said.

Google previously launched VR view, which enables developers to embed immersive content on Android, iOS, and the web. Users can view it on their phone, with a Cardboard viewer, or with a Chrome browser on their desktop computer.

For native apps, programmers can embed a VR view in an app or web page by grabbing the latest Cardboard SDK for Android or iOS and adding a few lines of code.

On the web, embedding a VR view is as simple as adding an iframe to your site, as KurzweilAI did in the 360-degree view shown at the top of this page, using iframe code copied from the HTML on “Introducing VR view: embed immersive content into your apps and websites” on the Google Developers Blog. (Chrome browser is required. In addition to a VR viewer, you can use either the mouse or the four arrow keys to explore the image in 360 degrees.)

Your smartphone and tablet may be making you ADHD-like

(credit: KurzweilAI)

Smartphones and other digital technology may be causing ADHD-like symptoms, according to an open-access study published in the proceedings of ACM CHI ’16, the Human-Computer Interaction conference of the Association for Computing Machinery, ongoing in San Jose.

In a two-week experimental study, University of Virginia and University of British Columbia researchers showed that when students kept their phones on ring or vibrate and with notification alerts on, they reported more symptoms of inattention and hyperactivity than when they kept their phones on silent.

The results suggest that even people who have not been diagnosed with ADHD may experience some of the disorder’s symptoms, including distraction, fidgeting, trouble sitting still, difficulty doing quiet tasks and activities, restlessness, difficulty focusing, and getting bored easily when trying to focus, the researchers said.

“We found the first experimental evidence that smartphone interruptions can cause greater inattention and hyperactivity — symptoms of attention deficit hyperactivity disorder — even in people drawn from a nonclinical population,” said Kostadin Kushlev, a psychology research scientist at the University of Virginia who led the study with colleagues at the University of British Columbia.

In the study, 221 students at the University of British Columbia, drawn from the general student population, were assigned for one week to maximize phone interruptions by keeping notification alerts on and their phones within easy reach.

Indirect effects of manipulating smartphone interruptions on psychological well-being via inattention symptoms. Numbers are unstandardized regression coefficients. (credit: Kostadin Kushlev et al./CHI 2016)

During another week participants were assigned to minimize phone interruptions by keeping alerts off and their phones away.

At the end of each week, participants completed questionnaires assessing inattention and hyperactivity. Unsurprisingly, the results showed that the participants experienced significantly higher levels of inattention and hyperactivity when alerts were turned on.
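The figure’s “indirect effects” refers to mediation analysis: the effect of interruptions on well-being routed through inattention is estimated as the product of two regression coefficients. A toy reconstruction on simulated data follows; the numbers and effect sizes are illustrative, not the study’s.

```python
# Toy sketch of the mediation logic in the figure above:
# interruptions -> inattention -> lower well-being.
import numpy as np

rng = np.random.default_rng(0)
n = 221
interrupt = rng.integers(0, 2, n)            # 0 = alerts off, 1 = alerts on
inattention = 0.5 * interrupt + rng.normal(0, 1, n)
wellbeing = -0.4 * inattention + rng.normal(0, 1, n)

def slope(x, y):
    """Unstandardized OLS regression coefficient of y on x."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(interrupt.astype(float), inattention)  # path a
b = slope(inattention, wellbeing)                # path b (ignoring covariates)
print("indirect effect a*b ≈", round(a * b, 3))
```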

Digital mobile users focus more on concrete details than the big picture

Using digital platforms such as tablets and laptops for reading may also make you more inclined to focus on concrete details rather than interpreting information more contemplatively or abstractly (seeing the big picture), according to another open-access study published in ACM CHI ’16 proceedings.

Researchers at Dartmouth’s Tiltfactor lab and the Human-Computer Interaction Institute at Carnegie Mellon University conducted four studies with a total of 300 participants. Participants were tested by reading a short story and a table of information about fictitious Japanese car models.

The studies revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical printout) exhibited a lower level of “construal” (abstract) thinking. However, the researchers also found that engaging the subjects in a more abstract mindset prior to an information processing task on a digital platform appeared to help facilitate a better performance on tasks that require abstract thinking.

Coping with digital overload

Given the widespread acceptance of digital devices, as evidenced by millions of apps, ubiquitous smartphones, and the distribution of iPads in schools, surprisingly few studies exist about how digital tools affect us, the researchers noted.

“The ever-increasing demands of multitasking, divided attention, and information overload that individuals encounter in their use of digital technologies may cause them to ‘retreat’ to the less cognitively demanding lower end of the concrete-abstract continuum,” according to the authors. They also say the new research suggests that “this tendency may be so well-ingrained that it generalizes to contexts in which those resource demands are not immediately present.”

Their recommendation for human-computer interaction designers and researchers: “Consider strategies for encouraging users to see the ‘forest’ as well as the ‘trees’ when interacting with digital platforms.”

Jony Ive, are you listening?


Abstract of “Silence your phones”: Smartphone notifications increase inattention and hyperactivity symptoms

As smartphones increasingly pervade our daily lives, people are ever more interrupted by alerts and notifications. Using both correlational and experimental methods, we explored whether such interruptions might be causing inattention and hyperactivity, symptoms associated with Attention Deficit Hyperactivity Disorder (ADHD), even in people not clinically diagnosed with ADHD. We recruited a sample of 221 participants from the general population. For one week, participants were assigned to maximize phone interruptions by keeping notification alerts on and their phones within their reach/sight. During another week, participants were assigned to minimize phone interruptions by keeping alerts off and their phones away. Participants reported higher levels of inattention and hyperactivity when alerts were on than when alerts were off. Higher levels of inattention in turn predicted lower productivity and psychological well-being. These findings highlight some of the costs of ubiquitous connectivity and suggest how people can reduce these costs simply by adjusting existing phone settings.


Abstract of High-Low Split: Divergent Cognitive Construal Levels Triggered by Digital and Non-digital Platforms

The present research investigated whether digital and non-digital platforms activate differing default levels of cognitive construal. Two initial randomized experiments revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical print-out) exhibited a lower level of construal, one prioritizing immediate, concrete details over abstract, decontextualized interpretations. This pattern emerged both in digital platform participants’ greater preference for concrete versus abstract descriptions of behaviors as well as superior performance on detail-focused items (and inferior performance on inference-focused items) on a reading comprehension assessment. A pair of final studies found that the likelihood of correctly solving a problem-solving task requiring higher-level “gist” processing was: (1) higher for participants who processed the information for the task on a non-digital versus digital platform and (2) heightened for digital platform participants who had first completed an activity activating an abstract mindset, compared to (equivalent) performance levels exhibited by participants who had either completed no prior activity or completed an activity activating a concrete mindset.


What happens when drones and people sync their vision?

Multiple recon drones in the sky suddenly all aim their cameras at a person of interest on the ground, synced to what people on the ground are seeing …

That could be a reality soon, thanks to an agreement just announced by the mysterious SICdrone, an unmanned aircraft system manufacturer, and CrowdOptic, an “interactive streaming platform that connects the world through smart devices.”

A CrowdOptic “cluster” — multiple people focused on the same object. (credit: CrowdOptic)

CrowdOptic’s technology lets a “cluster” (multiple people or objects) point their cameras or smartphones at the same thing (say, at a concert or sporting event), with different views, allowing for group chat or sharing content.
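One plausible way to detect such a cluster is to find the point where several camera sight lines nearly intersect, via least squares. The sketch below illustrates the geometry only; the article does not disclose CrowdOptic’s patented method, so treat this as our own reconstruction of the idea.

```python
# Hedged sketch: least-squares intersection of 2-D camera sight lines.
import numpy as np

def focal_point(origins, directions):
    """Find the point minimizing the squared distance to all rays,
    each given by a camera position and a heading vector."""
    A = np.zeros((2, 2)); b = np.zeros(2)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float); d /= np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # project out the ray direction
        A += P; b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

origins = [(0, 0), (10, 0), (5, 8)]
directions = [(1, 1), (-1, 1), (0, -1)]  # all roughly aimed at (5, 5)
print(focal_point(origins, directions))  # ≈ [5. 5.]
```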

Drone air control

For SICdrone, the idea is to use CrowdOptic tech to automatically orchestrate the drones’ onboard cameras to track and capture multiple camera angles (and views) of a single point of interest.* Beyond that, this tech could provide vital flight-navigation systems to coordinate multiple drones without having them conflict (or crash), says CrowdOptic CEO Jon Fisher.

This disruptive innovation might become essential (and mandated by law?) as Amazon, Flirtey, and others compete to dominate drone delivery. It could also possibly help with the growing concern about drone risk to airplanes.**

Other current (and possible) applications of CrowdOptic’s tech include first response, news and sports reporting, advertising analytics (seeing what people focus on), linking up augmented-reality and VR headset users, and “social TV” (live attendees — using the Periscope app, for example — provide the most interesting video to people watching at home), Fisher explained to KurzweilAI.

* This uses several CrowdOptic patents (U.S. Patents 8,527,340, 9,020,832, and 9,264,474).

** Drone Comes Within 200 Feet Of Passenger Jet Coming In To Land At LAX

Can human-machine superintelligence solve the world’s most dire problems?


(video credit: Human Computation Institute | Dr. Pietro Michelucci)

“Human computation” — combining human and computer intelligence in crowd-powered systems — might be what we need to solve the “wicked” problems of the world, such as climate change and geopolitical conflict, say researchers from the Human Computation Institute (HCI) and Cornell University.

In an article published in the journal Science, the authors present a new vision of human computation that takes on hard problems that until recently have remained out of reach.

Humans surpass machines at many things, ranging from visual pattern recognition to creative abstraction. And with the help of computers, these cognitive abilities can be effectively combined into multidimensional collaborative networks that achieve what traditional problem-solving cannot, the authors say.

Microtasking

Microtasking: Crowdsourcing breaks large tasks down into microtasks, which can be things at which humans excel, like classifying images. The microtasks are delivered to a large crowd via a user-friendly interface, and the data are aggregated for further processing. (credit: Pietro Michelucci and Janis L. Dickinson/Science)

Most of today’s human-computation systems rely on “microtasking” — sending “micro-tasks” to many individuals and then stitching together the results. For example, 165,000 volunteers in EyeWire have analyzed thousands of images online to help build the world’s most complete map of retinal neurons.

Another example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human.

“Microtasking is well suited to problems that can be addressed by repeatedly applying the same simple process to each part of a larger data set, such as stitching together photographs contributed by residents to decide where to drop water during a forest fire,” the authors note.
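A minimal microtasking aggregator can be a few lines: fan the same item out to several volunteers, then stitch the answers together with a majority vote. Real systems such as EyeWire use far richer consensus models; this is only the skeleton of the idea.

```python
from collections import Counter

def aggregate(labels_per_item):
    """Minimal microtasking aggregator: each image is labeled by
    several volunteers; take the majority vote per item."""
    return {item: Counter(votes).most_common(1)[0][0]
            for item, votes in labels_per_item.items()}

crowd_labels = {
    "img_001": ["neuron", "neuron", "noise"],
    "img_002": ["noise", "noise", "noise"],
}
print(aggregate(crowd_labels))  # {'img_001': 'neuron', 'img_002': 'noise'}
```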

But this microtasking approach alone cannot address the tough challenges we face today, say the authors. “A radically new approach is needed to solve ‘wicked problems’ — those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences.” Examples include climate change, disease, and geopolitical conflict, which have non-obvious secondary effects such as the political exploitation of a pandemic crisis.

New human-computation technologies

New human-computation technologies: In creating problem-solving ecosystems, researchers are beginning to explore how to combine the cognitive processing of many human contributors with machine-based computing to build faithful models of the complex, interdependent systems that underlie the world’s most challenging problems. (credit: Pietro Michelucci and Janis L. Dickinson/Science)

The authors say new human computation technologies can help build flexible collaborative environments. Recent techniques provide real-time access to crowd-based inputs, where individual contributions can be processed by a computer and sent to the next person for improvement or analysis of a different kind.
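In code, the difference from microtasking is that contributions are chained rather than merely aggregated: each human or machine stage refines the previous output. A toy sketch, with all stages hypothetical:

```python
def pipeline(item, stages):
    """Sketch of the 'crowd pipeline' idea above: each contributor
    (or machine step) improves or analyzes the previous output."""
    for stage in stages:
        item = stage(item)
    return item

stages = [
    lambda text: text.strip().lower(),             # machine cleanup
    lambda text: text + " [verified by worker A]",  # human check
    lambda text: text + " [classified by worker B]",
]
print(pipeline("  Flooded road near Main St.  ", stages))
```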

This idea is already taking shape in several human-computation projects:

  • YardMap.org, launched by Cornell in 2012, maps global conservation efforts. It allows participants to interact and build on each other’s work — something that crowdsourcing alone cannot achieve.
  • WeCureAlz.com accelerates Cornell-based Alzheimer’s disease research by combining two successful microtasking systems into an interactive analytic pipeline that builds blood-flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system.

“By enabling members of the general public to play some simple online game, we expect to reduce the time to treatment discovery from decades to just a few years,” says HCI director and lead author Pietro Michelucci, PhD. “This gives an opportunity for anyone, including the tech-savvy generation of caregivers and early stage AD patients, to take the matter into their own hands.”


Abstract of The power of crowds

Human computation, a term introduced by Luis von Ahn, refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone. The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives. But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy, it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.

Social-media news consumers at higher risk of ‘information bubbles’

Each circle is proportional to the number of clicks to a website from a single user (a, b) or a group of users (c, d), referred by search engines (a, c) vs. social media (b, d). Social media concentrate clicks on fewer sources, as shown by the larger circles. (credit: Dimitar Nikolov)

Do you find your news and information from social media instead of search engines? If so, you are at risk of becoming trapped in a “collective social bubble.”

That’s according to Indiana University researchers in a study, “Measuring online social bubbles,” recently published in the new open-access online journal PeerJ Computer Science, based on an analysis of more than 100 million Web clicks and 1.3 billion public posts on social media*.

“These findings provide the first large-scale empirical comparison between the diversity of information sources reached through different types of online activity,” said Dimitar Nikolov, a doctoral student in the School of Informatics and Computing at Indiana University (IU), lead author of the study.

Collective social bubble

“Our analysis shows that people collectively access information from a significantly narrower range of sources on social media compared to search engines.”

To measure the diversity of information accessed over each medium, the researchers developed a method that assigned a score for how user clicks from social media versus search engines were distributed across millions of sites.

A lower score indicated users’ Web traffic concentrated on fewer sites; a higher score indicated traffic scattered across more sites. A single click on CNN and nine clicks on MSNBC, for example, would generate a lower score than five clicks on each site.
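A natural way to formalize such a score is Shannon entropy over a user’s click distribution. The paper’s exact measure may differ in details (normalization, log base), but the CNN/MSNBC example above falls out directly:

```python
import math
from collections import Counter

def diversity_score(clicks):
    """Shannon entropy (bits) of a click distribution across sites:
    lower = traffic concentrated on fewer sources."""
    total = sum(clicks.values())
    return -sum((c / total) * math.log2(c / total)
                for c in clicks.values())

print(diversity_score(Counter(CNN=1, MSNBC=9)))  # ≈ 0.469: concentrated
print(diversity_score(Counter(CNN=5, MSNBC=5)))  # = 1.0: more diverse
```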

Overall, the analysis found that people who accessed news on social media scored significantly lower in terms of the diversity of their information sources than users who accessed current information using search engines.

The results show the rise of a “collective social bubble” where news is shared within communities of like-minded individuals, said Nikolov, noting a trend in modern media consumption where “the discovery of information is being transformed from an individual to a social endeavor.”

How “friends” limit your sphere of information

Nikolov noted that people who adopt this behavior as a coping mechanism for “information overload” may not even be aware they’re filtering their access to information by using social media platforms, such as Facebook, where the majority of news stories originate from friends’ postings.

“The rapid adoption of the Web as both a source of knowledge and social space has made it ever more difficult for people to manage the constant stream of news and information arriving on their screens,” added study co-author Filippo Menczer, professor of informatics and computing and director of the Center for Complex Networks and Systems Research. “These results suggest the conflation of these previously distinct activities may be contributing to a growing ‘bubble effect’ in information consumption.”

“Compared to a baseline of information-seeking activities, this evidence shows, empirically, that social media does in fact expose communities and individuals to a significantly narrower range of news sources, despite the many information channels on the medium,” Nikolov said.

It would also be interesting to see how social media compare as news sources to traditional news publications, and how social media may make users more vulnerable to propaganda and other forms of information and opinion control.

* IU scientists applied their analysis to three massive sources of information on browsing habits. An anonymous database compiled by the researchers contained the Web searches of 100,000 users at IU between October 2006 and May 2010 (the primary source). Two other datasets contained identifiers, enabling the scientists to confirm that information-access behavior at the community level reflected the behavior of individual users: a dataset containing 18 million clicks by more than half a million users of the AOL search engine in 2006; and 1.3 billion public posts containing links shared by over 89 million people on Twitter between April 2013 and April 2014. To measure the range of news sources accessed by users, the IU scientists used an open directory of news sites, filtering out blogs and wikis, resulting in 3,500 news outlets.


Abstract of Measuring online social bubbles

Social media have become a prevalent channel to access information, spread ideas, and influence opinions. However, it has been suggested that social and algorithmic filtering may cause exposure to less diverse points of view. Here we quantitatively measure this kind of social bias at the collective level by mining massive datasets of web clicks. Our analysis shows that collectively, people access information from a significantly narrower spectrum of sources through social media and email, compared to a search baseline. The significance of this finding for individual exposure is revealed by investigating the relationship between the diversity of information sources experienced by users at both the collective and individual levels in two datasets where individual users can be analyzed—Twitter posts and search logs. There is a strong correlation between collective and individual diversity, supporting the notion that when we use social media we find ourselves inside “social bubbles.” Our results could lead to a deeper understanding of how technology biases our exposure to new information.

Semantic Scholar uses AI to transform scientific search

Example of the top return in a Semantic Scholar search for “quantum computer silicon” constrained to overviews (52 out of 1,397 selected papers since 1989) (credit: AI2)

The Allen Institute for Artificial Intelligence (AI2) launched Monday (Nov. 2) its free Semantic Scholar service, intended to allow scientific researchers to quickly cull through the millions of scientific papers published each year to find those most relevant to their work.

Semantic Scholar leverages AI2’s expertise in data mining, natural-language processing, and computer vision, according to Oren Etzioni, PhD, CEO at AI2. At launch, the system searches more than three million computer science papers, and will add scientific categories on an ongoing basis.

With Semantic Scholar, computer scientists can:

  • Home in quickly on what they are looking for, with advanced selection filtering tools. Researchers can filter search results by author, publication, topic, and date published. This gets the researcher to the most relevant result in the fastest way possible, and reduces information overload.
  • Instantly access a paper’s figures and findings. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
  • Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
  • Be prompted with key phrases within each paper to winnow the search further.

Example of figures and tables extracted from the first document discovered (“Quantum computation and quantum information”) in the search above (credit: AI2)

How Semantic Scholar works

Using machine reading and vision methods, Semantic Scholar crawls the web, finding all PDFs of publicly available scientific papers on computer science topics, extracting both text and diagrams/captions, and indexing it all for future contextual retrieval.

Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of paper and how influential its citations are. It provides the scientist with a simple user interface (optimized for mobile) that maps to academic researchers’ expectations.

Filters such as topic, date of publication, author and where published are built in. It includes smart, contextual recommendations for further keyword filtering as well. Together, these search and discovery tools provide researchers with a quick way to separate wheat from chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.
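A hypothetical sketch of what filter-then-rank retrieval like this looks like; the field names and the citation-based ranking rule are our assumptions for illustration, not Semantic Scholar’s actual API:

```python
# Toy corpus; fields are illustrative, not Semantic Scholar's schema.
papers = [
    {"title": "Quantum computation and quantum information",
     "year": 2000, "topic": "quantum computing", "citations": 25000},
    {"title": "A silicon qubit demo", "year": 2015,
     "topic": "quantum computing", "citations": 300},
]

def search(papers, topic=None, year_from=None):
    """Apply the kinds of filters described above, then rank by
    citation count as a crude proxy for citation influence."""
    hits = [p for p in papers
            if (topic is None or p["topic"] == topic)
            and (year_from is None or p["year"] >= year_from)]
    return sorted(hits, key=lambda p: -p["citations"])

for p in search(papers, topic="quantum computing", year_from=1989):
    print(p["year"], p["title"])
```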

Semantic Scholar builds from the foundation of other research-paper search applications such as Google Scholar, adding AI methods to overcome information overload.

“Semantic Scholar is a first step toward AI-based discovery engines that will be able to connect the dots between disparate studies to identify novel hypotheses and suggest experiments that would otherwise be missed,” said Etzioni. “Our goal is to enable researchers to find answers to some of science’s thorniest problems.”