Category Archives: science

Research finds heavy Facebook users make impaired decisions like drug addicts

Researchers at Michigan State University are exploring the idea that there’s more to “social media addiction” than casual joking about being too online might suggest. Their paper, titled “Excessive social media users demonstrate impaired decision making in the Iowa Gambling Task” (Meshi, Elizarova, Bender and Verdejo-Garcia) and published in the Journal of Behavioral Addictions, indicates that people who use social media sites heavily actually display some of the behavioral hallmarks of someone addicted to cocaine or heroin.

The study asked 71 participants to first rate their own Facebook usage with a measure known as the Bergen Facebook Addiction Scale. The study subjects then went on to complete something called the Iowa Gambling Task (IGT), a classic research tool that evaluates impaired decision making. The IGT presents participants with four virtual decks of cards associated with rewards or punishments and asks them to choose cards from the decks to maximize their virtual winnings. As the study explains, “Participants are also informed that some decks are better than others and that if they want to do well, they should avoid the bad decks and choose cards from the good decks.”
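
To make the task concrete, here is a minimal simulation sketch of an IGT-style setup in Python. The payoff values and the random "players" are illustrative assumptions, not the parameters used in the study:

```python
import random

# Minimal, hypothetical sketch of an Iowa Gambling Task-style setup.
# Payoff values are illustrative placeholders, not the ones used in the study.
# "Bad" decks pay more per card but carry penalties that make them net losers;
# "good" decks pay less per card but come out ahead over time.
DECKS = {
    "A": {"reward": 100, "penalty": 250,  "penalty_prob": 0.5},  # bad
    "B": {"reward": 100, "penalty": 1250, "penalty_prob": 0.1},  # bad
    "C": {"reward": 50,  "penalty": 50,   "penalty_prob": 0.5},  # good
    "D": {"reward": 50,  "penalty": 250,  "penalty_prob": 0.1},  # good
}

def draw(deck_name):
    """Draw one card: collect the reward, sometimes pay the penalty."""
    deck = DECKS[deck_name]
    outcome = deck["reward"]
    if random.random() < deck["penalty_prob"]:
        outcome -= deck["penalty"]
    return outcome

def play(choices):
    """Run a sequence of deck choices and return the final balance."""
    balance = 2000  # virtual starting stake
    for name in choices:
        balance += draw(name)
    return balance

# A player who learns to favor the good decks ends up ahead of one who
# keeps frequenting the bad decks, which is the contrast the IGT measures.
random.seed(0)
print("mostly good decks:", play(random.choices("CD", k=100)))
print("mostly bad decks: ", play(random.choices("AB", k=100)))
```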

What the researchers found was telling. Study participants who self-reported as excessive Facebook users actually performed worse than their peers on the IGT, frequenting the two “bad” decks that offer immediate gains but ultimately result in losses. That difference in behavior was statistically significant in the latter portion of the IGT, when a participant has had ample time to observe the decks’ patterns and knows which decks present the greatest risk.

The IGT has been used to study everything from patients with frontal lobe brain injuries to heroin addicts, but using it as a measure to examine social media addicts is novel. It suggests that, alongside deeper structural research, much of the existing methodological framework for studying substance addiction can be applied to social media users.

The study is narrow, but interesting, and offers a few paths for follow-up research. As the researchers acknowledge, an ideal study would actually observe participants’ social media usage and sort them into high- or low-usage categories based on behavior rather than a self-report survey.

Future research could also delve more deeply into excessive users across different social networks. The study only looked at Facebook use, “because it is currently the most widely used [social network] around the world,” but one could expect to see similar results among Instagram’s billion-plus monthly users and potentially the substantially smaller population on Twitter.

Ultimately, we know that social media is shifting human behavior and potentially its neurological underpinnings; we just don’t know the extent of it — yet. Due to the methodical nature of behavioral research and the often extremely protracted process of publishing it, we likely won’t know for years to come the results of studies conducted now. Still, as this study shows, there are researchers at work examining how social media is impacting our brains and our behavior — we just might not be able to see the big picture for some time.

This box sucks pure water out of dry desert air

For many of us, clean, drinkable water comes right out the tap. But for billions it’s not that simple, and all over the world researchers are looking into ways to fix that. Today brings work from Berkeley, where a team is working on a water-harvesting apparatus that requires no power and can produce water even in the dry air of the desert. Hey, if a cactus can do it, why can’t we?

While there are numerous methods for collecting water from the air, many require power or parts that need to be replaced; what professor Omar Yaghi has developed needs neither.

The secret isn’t some clever solar concentrator or low-friction fan — it’s all about the materials. Yaghi is a chemist, and has created what’s called a metal-organic framework, or MOF, that’s eager both to absorb and release water.

It’s essentially a powder made of tiny crystals in which water molecules get caught as the temperature decreases. Then, when the temperature rises, the water is released back into the air.

Yaghi demonstrated the process on a small scale last year, but now he and his team have published the results of a larger field test producing real-world amounts of water.

They put together a box about two feet per side with a layer of MOF on top that sits exposed to the air. Every night the temperature drops and the humidity rises, and water is trapped inside the MOF; in the morning, the sun’s heat drives the water from the powder, and it condenses on the box’s sides, kept cool by a sort of hat. The result of a night’s work: 3 ounces of water per pound of MOF used.

That’s not much more than a few sips, but improvements are already on the way. Currently the MOF uses zirconium, but an aluminum-based MOF, already being tested in the lab, will cost 99 percent less and produce twice as much water.
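
A rough back-of-the-envelope calculation, using the yields quoted above and assuming a drinking requirement of about two liters per person per day (an assumption for illustration, not a figure from the study), shows what those numbers imply:

```python
# Back-of-the-envelope yield estimate from the figures quoted above.
# The 2-liter daily drinking requirement is an assumption for illustration.
OZ_PER_LITER = 33.8           # fluid ounces in a liter
current_yield_oz_per_lb = 3   # zirconium MOF: ~3 oz of water per pound per night
improved_yield_oz_per_lb = 6  # the aluminum MOF is said to produce twice as much

daily_need_oz = 2 * OZ_PER_LITER  # ~68 oz of drinking water per person per day

for label, yield_oz in [("zirconium", current_yield_oz_per_lb),
                        ("aluminum", improved_yield_oz_per_lb)]:
    pounds_needed = daily_need_oz / yield_oz
    print(f"{label} MOF: ~{pounds_needed:.0f} lb of powder per person per day")
# zirconium MOF: ~23 lb; aluminum MOF: ~11 lb, which is a few boxes' worth of powder
```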

With the new powder and a handful of boxes, a person’s drinking needs are met without using any power or consumable material. Add a mechanism that harvests and stores the water and you’ve got yourself an off-grid potable water solution going.

“There is nothing like this,” Yaghi explained in a Berkeley news release. “It operates at ambient temperature with ambient sunlight, and with no additional energy input you can collect water in the desert. The aluminum MOF is making this practical for water production, because it is cheap.”

He says that there are already commercial products in development. More tests, with mechanical improvements and including the new MOF, are planned for the hottest months of the summer.

Forget DeepFakes, Deep Video Portraits are way better (and worse)

The strange, creepy world of “deepfakes,” videos (often explicit) with the faces of the subjects replaced by those of celebrities, set off alarm bells just about everywhere early this year. And in case you thought that sort of thing had gone away because people found it unethical or unconvincing, the practice is back with the highly convincing “Deep Video Portraits,” which refines and improves the technique.

To be clear, I don’t want to conflate this interesting research with the loathsome practice of putting celebrity faces on adult film star bodies. They’re also totally different implementations of deep learning-based image manipulation. But this application of technology is clearly here to stay and it’s only going to get better — so we had best keep pace with it so we don’t get taken by surprise.

Deep Video Portraits is the title of a paper submitted for consideration this August at SIGGRAPH; it describes an improved technique for reproducing the motions, facial expressions, and speech movements of one person using the face of another. Here’s a mild example:

What’s special about this technique is how comprehensive it is. It uses a video of a target person, in this case President Obama, to get a handle on what constitutes the face, eyebrows, corners of the mouth, background, and so on, and how they move normally.

Then, by carefully tracking those same landmarks on a source video, it can apply the necessary distortions to the President’s face, using the source actor’s motions and expressions to drive that visual information.

So not only do the body and face move like the source video, but every little nuance of expression is captured and reproduced using the target person’s own expressions! If you look closely, even the shadows behind the person (if present) are accurate.
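
The paper’s actual pipeline renders photorealistic frames with a deep network trained on the target footage, but the underlying retargeting idea can be sketched with a bit of NumPy: measure how the source actor’s landmarks move relative to a neutral pose, then apply that motion to the target’s neutral landmarks. Everything below (the function and the toy landmark arrays) is invented for illustration and is not the paper’s method:

```python
import numpy as np

def transfer_expression(source_neutral, source_frame, target_neutral):
    """All inputs are (N, 2) arrays of 2D facial landmark positions."""
    offsets = source_frame - source_neutral   # how the source face moved this frame
    return target_neutral + offsets           # apply that motion to the target face

# Toy data: three landmarks (e.g. brow, mouth corner, chin), made up for the example.
src_neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
src_frame   = np.array([[0.0, 0.1], [1.0, 0.1], [0.5, 1.2]])  # raised brows, dropped chin
tgt_neutral = np.array([[0.0, 0.0], [1.2, 0.0], [0.6, 1.1]])

print(transfer_expression(src_neutral, src_frame, tgt_neutral))
```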

The researchers verified the effectiveness of this by comparing video of a person actually saying something with what the deep learning network produced using that same video as a source. “Our results are nearly indistinguishable from the real video,” says one of the researchers. And it’s true.

So, while you could use this to make video of anyone who’s appeared on camera appear to say whatever you want them to say — in your voice, it should be mentioned — there are practical applications as well. The video shows how dubbing a voice for a movie or show could be improved by syncing the character’s expression properly with the voice actor.

There’s no way to make a person do something or make an expression that’s too far from what they do on camera, though. For instance, the system can’t synthesize a big grin if the person is looking sour the whole time (though it might try and fail hilariously). And naturally there are all kinds of little bugs and artifacts. So for now the hijinks are limited.

But as you can see from the comparison with previous attempts at doing this, the science is advancing at a rapid pace. The differences between last year’s models and this year’s are clearly noticeable, and 2019’s will be more advanced still. I told you all this would happen back when that viral video of the eagle picking up the kid was making the rounds.

“I’m aware of the ethical implications,” coauthor Justus Thies told The Register. “That is also a reason why we published our results. I think it is important that the people get to know the possibilities of manipulation techniques.”

If you’ve ever thought about starting a video forensics company, now might be the time. Perhaps a deep learning system to detect deep learning-based image manipulation is just the ticket.

The paper describing Deep Video Portraits, from researchers at Technicolor, Stanford, the University of Bath, the Max Planck Institute for Informatics, and the Technical University of Munich, is available for you to read here on arXiv.

Watch a hard-working robot improvise to climb drawers and cross gaps

A robot’s got to know its limitations. But that doesn’t mean it has to accept them. This one in particular uses tools to expand its capabilities, commandeering nearby items to construct ramps and bridges. It’s satisfying to watch but, of course, also a little worrying.

This research, from Cornell and the University of Pennsylvania, is essentially about making a robot take stock of its surroundings and recognize something it can use to accomplish a task that it knows it can’t do on its own. It’s actually more like a team of robots, since the parts can detach from one another and accomplish things on their own. But you didn’t come here to debate the multiplicity or unity of modular robotic systems! That’s for the folks at the IEEE International Conference on Robotics and Automation, where this paper was presented (and Spectrum got the first look).

SMORES-EP is the robot in play here, and the researchers have given it a specific breadth of knowledge. It knows how to navigate its environment, but also how to inspect it with its little mast-cam and from that inspection derive meaningful data like whether an object can be rolled over, or a gap can be crossed.

It also knows how to interact with certain objects, and what they do; for instance, it can use its built-in magnets to pull open a drawer, and it knows that a ramp can be used to roll up to an object of a given height or lower.

A high-level planning system directs the robots/robot-parts based on knowledge that isn’t critical for any single part to know. For example, given the instruction to find out what’s in a drawer, the planner understands that to accomplish that, the drawer needs to be open; for it to be open, a magnet-bot will have to attach to it from this or that angle, and so on. And if something else is necessary, for example a ramp, it will direct that to be placed as well.
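
The team’s actual planner is far more capable, but the flavor of the idea can be sketched as a toy precondition-chaining planner. All of the task and action names below are invented for illustration and are not the SMORES-EP code:

```python
# Toy goal-directed planner: each action lists the preconditions that must
# hold before it can run, and each precondition maps to the action that
# achieves it. Invented for illustration only.
ACTIONS = {
    "inspect_drawer_contents": {"needs": ["drawer_open"]},
    "open_drawer":             {"needs": ["magnet_bot_attached"]},
    "attach_magnet_bot":       {"needs": ["ramp_in_place"]},
    "place_ramp":              {"needs": []},
}

ACHIEVES = {
    "drawer_open": "open_drawer",
    "magnet_bot_attached": "attach_magnet_bot",
    "ramp_in_place": "place_ramp",
}

def plan(goal_action, plan_so_far=None):
    """Recursively satisfy preconditions, returning actions in execution order."""
    if plan_so_far is None:
        plan_so_far = []
    for precondition in ACTIONS[goal_action]["needs"]:
        plan(ACHIEVES[precondition], plan_so_far)
    if goal_action not in plan_so_far:
        plan_so_far.append(goal_action)
    return plan_so_far

print(plan("inspect_drawer_contents"))
# ['place_ramp', 'attach_magnet_bot', 'open_drawer', 'inspect_drawer_contents']
```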

The experiment shown in this video has the robot system demonstrating how this could work in a situation where the robot must accomplish a high-level task using this limited but surprisingly complex body of knowledge.

In the video, the robot is told to check the drawers for certain objects. In the first drawer, the target objects aren’t present, so it must inspect the next one up. But it’s too high — so it needs to get on top of the first drawer, which luckily for the robot is full of books and constitutes a ledge. The planner sees that a ramp block is nearby and orders it to be put in place, and then part of the robot detaches to climb up and open the drawer, while the other part maneuvers into place to check the contents. Target found!

In the next task, it must cross a gap between two desks. Fortunately, someone left the parts of a bridge just lying around. The robot puts the bridge together, places it in position after checking the scene, and sends its forward half rolling towards the goal.

These cases may seem rather staged, but this isn’t about the robot itself and its ability to tell what would make a good bridge. That comes later. The idea is to create systems that logically approach real-world situations based on real-world data and solve them using real-world objects. Being able to construct a bridge from scratch is nice, but unless you know what a bridge is for, when and how it should be applied, where it should be carried and how to get over it, and so on, it’s just a part in search of a whole.

Likewise, many a robot with a perfectly good drawer-pulling hand will have no idea that you need to open a drawer before you can tell what’s in it, or that maybe you should check other drawers if the first doesn’t have what you’re looking for!

Such basic problem-solving is something we take for granted, but nothing can be taken for granted when it comes to robot brains. Even in the experiment described above, the robot failed multiple times for multiple reasons while attempting to accomplish its goals. That’s okay — we all have a little room to improve.

Teens dump Facebook for YouTube, Instagram and Snapchat

A Pew survey of teens and the ways they use technology finds that kids have largely ditched Facebook for the visually stimulating alternatives of Snapchat, YouTube, and Instagram. Nearly half said they’re online “almost constantly,” which will probably be used as a source of FUD, but really is just fine. Even teens, bless their honest little hearts, have doubts about whether social media is good or evil.

The survey is the first by Pew since 2015, and plenty has changed. The factor that seems to have driven the most change is the ubiquity and power of smartphones, which 95 percent of respondents said they had access to. Fewer, especially teens from lower-income families, had laptops and desktops.

This mobile-native cohort has opted for mobile-native content and apps, which means highly visual and easily browsable. That’s much more the style of the top three apps: YouTube takes first place with 85 percent reporting they use it, then Instagram at 72 percent, and Snapchat at 69 percent.

Facebook, at 51 percent, is a far cry from the 71 percent who used it back in 2015, when it was top of the heap by far. Interestingly, the 51 percent average is not representative of any of the income groups polled; 36 percent of teens from higher-income households used it, while 70 percent of teens from lower-income households did.

What could account for this divergence? The latest and greatest hardware isn’t required to run the top three apps, nor (necessarily) an expensive data plan. With no data to go on from the surveys and no teens nearby to ask, I’ll leave this to the professionals to look into. No doubt Facebook will be interested to learn this — though who am I kidding, it probably knows already. (There’s even a teen tutorial.)

Twice as many teens as in 2015 reported being online “almost constantly,” but really, it’s hard to say when any of us is truly “offline.” Teens aren’t literally looking at their phones all day, much as that may seem to be the case, but they — and the rest of us — are rarely more than a second or two away from checking messages, looking something up, and so on. I’m surprised the “constantly” number isn’t higher, honestly.

Gaming is still dominated by males, almost all of whom play in some fashion, but 83 percent of teen girls also said they gamed, so the gap is closing.

When asked whether social media had a positive or negative effect, teens were split. They valued it for connecting with friends and family, finding news and information, and meeting new people. But they decried its use in bullying and spreading rumors, its complicated effect on in-person relationships, and how it distracts from and distorts real life.

Here are some quotes from real teens demonstrating real insight.

Those who feel it has an overall positive effect:

  • “I feel that social media can make people my age feel less lonely or alone. It creates a space where you can interact with people.”
  • “My mom had to get a ride to the library to get what I have in my hand all the time. She reminds me of that a lot.”
  • “We can connect easier with people from different places and we are more likely to ask for help through social media which can save people.”
  • “It has given many kids my age an outlet to express their opinions and emotions, and connect with people who feel the same way.”

And those who feel it’s negative:

  • “People can say whatever they want with anonymity and I think that has a negative impact.”
  • “Gives people a bigger audience to speak and teach hate and belittle each other.”
  • “It makes it harder for people to socialize in real life, because they become accustomed to not interacting with people in person.”
  • “Because teens are killing people all because of the things they see on social media or because of the things that happened on social media.”

That last one is scary.

You can read the rest of the report and scrutinize Pew’s methodology here.

HoloLens acts as eyes for blind users and guides them with audio prompts

Microsoft’s HoloLens has an impressive ability to quickly sense its surroundings, but limiting it to displaying emails or game characters on them would show a lack of creativity. New research shows that it works quite well as a visual prosthesis for the vision impaired, not relaying actual visual data but guiding them in real time with audio cues and instructions.

The researchers, from Caltech and the University of Southern California, first argue that restoring vision is at present simply not a realistic goal, but that replicating the perceptual portion of vision isn’t necessary to restore the practical portion. After all, if you can tell where a chair is, you don’t need to see it to avoid it, right?

Crunching visual data and producing a map of high-level features like walls, obstacles and doors is one of the core capabilities of the HoloLens, so the team decided to let it do its thing and recreate the environment for the user from these extracted features.

They designed the system around sound, naturally. Every major object and feature can tell the user where it is, either via voice or sound. Walls, for instance, hiss (presumably white noise, not a snake hiss) as the user approaches them. And the user can scan the scene, with objects announcing themselves in turn from left to right, each from the direction in which it is located. A single object can be selected and will repeat its callout to help the user find it.
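
As a rough illustration of that left-to-right scan, here is a toy sketch that sorts made-up scene objects by their angle relative to the user; the real system spatializes actual audio rather than printing text, and the object list and positions below are assumptions:

```python
import math

# Hypothetical scene: objects with positions relative to the user, in meters.
# x is to the user's right, z is straight ahead.
scene = [
    {"name": "chair",   "x": -1.5, "z": 2.0},
    {"name": "doorway", "x":  2.0, "z": 3.0},
    {"name": "couch",   "x":  0.5, "z": 1.0},
]

def azimuth_degrees(obj):
    """Angle of the object relative to straight ahead; negative means to the left."""
    return math.degrees(math.atan2(obj["x"], obj["z"]))

# Announce objects in order from left to right, as in the scanning mode.
for obj in sorted(scene, key=azimuth_degrees):
    angle = azimuth_degrees(obj)
    side = "left" if angle < 0 else "right"
    print(f"{obj['name']}: {abs(angle):.0f} degrees to the {side}")
```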

That’s all well for stationary tasks like finding your cane or the couch in a friend’s house. But the system also works in motion.

The team recruited seven blind people to test it out. They were given a brief intro but no training, and then asked to accomplish a variety of tasks. The users could reliably locate and point to objects from audio cues, and were able to find a chair in a room in a fraction of the time they normally would, and avoid obstacles easily as well.

[Image: a render showing the actual paths taken by the users in the navigation tests]

Then they were tasked with navigating from the entrance of a building to a room on the second floor by following the headset’s instructions. A “virtual guide” repeatedly said “follow me” from an apparent distance of a few feet ahead, while also warning when stairs were coming, where handrails were and when the user had gone off course.

All seven users got to their destinations on the first try, and much more quickly than if they had had to proceed normally with no navigation. One subject, the paper notes, said “That was fun! When can I get one?”

Microsoft actually looked into something like this years ago, but the hardware just wasn’t there — HoloLens changes that. Even though it is clearly intended for use by sighted people, its capabilities naturally fill the requirements for a visual prosthesis like the one described here.

Interestingly, the researchers point out that this type of system was predicted more than 30 years ago, long before such systems were even close to possible:

“I strongly believe that we should take a more sophisticated approach, utilizing the power of artificial intelligence for processing large amounts of detailed visual information in order to substitute for the missing functions of the eye and much of the visual pre-processing performed by the brain,” wrote the clearly far-sighted C.C. Collins way back in 1985.

The potential for a system like this is huge, but this is just a prototype. As systems like HoloLens get lighter and more powerful, they’ll go from lab-bound oddities to everyday items — one can imagine the front desk at a hotel or mall stocking a few to give to vision-impaired folks who need to find their room or a certain store.

“By this point we expect that the reader already has proposals in mind for enhancing the cognitive prosthesis,” they write. “A hardware/software platform is now available to rapidly implement those ideas and test them with human subjects. We hope that this will inspire developments to enhance perception for both blind and sighted people, using augmented auditory reality to communicate things that we cannot see.”

SpaceX rocket will make a pit stop 305 miles up to deploy NASA satellites before moving on

Tuesday is the planned launch for a SpaceX Falcon 9 carrying two payloads to orbit — and this launch will be an especially interesting one. A set of five communications satellites for Iridium need to get to almost 500 miles up, but a NASA mission has to pop out at the 300 mile mark. What to do? Just make a pit stop, it turns out.

Now, of course it’s not a literal stop — the thing will be going thousands of miles per hour. But from the reference frame of the rocket itself, it’s not too different from pulling over to let a friend out before hitting the gas again and rolling on to the next destination.

What will happen is this: The rocket’s first stage will take it up out of the atmosphere, then separate and hopefully land safely. The second stage will then ignite to take its payload up to orbit. Usually at this point it’ll burn until it reaches the altitude and attitude required, then deploy the payload. But in this case it has a bit more work to do.

When the rocket has reached 305 miles up, it will dip its nose 30 degrees down and roll a bit to put NASA’s twin GRACE-FO satellites in position. One has to point toward Earth, the other toward space. Once in position, the separation system will send the two birds out, one in each direction, at a speed of about a foot per second.

The one on the Earth side will be put into a slightly slower and lower orbit than the one on the space side, and after they’ve spread out to a distance of 137 miles, the lower satellite will boost itself upwards and synchronize with the other.

That will take a few days, but just 10 minutes after it sends the GRACE-FOs on their way, the Falcon 9 will resume its journey, reigniting the second stage engine and bringing the Iridium NEXT satellites to about 485 miles up. There the engine will cut off again and the rest of the payload will be delivered.
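
The “few days” figure is consistent with a very crude estimate that ignores orbital mechanics entirely and simply treats the two spacecraft as drifting apart at their deployment speed (assumed here to be about a foot per second each, in opposite directions):

```python
# Crude separation-timeline check using the figures quoted above.
# Assumes ~1 ft/s per satellite in opposite directions (~2 ft/s relative)
# and ignores the orbital dynamics that actually drive the drift.
FEET_PER_MILE = 5280
relative_speed_ft_per_s = 2
target_separation_ft = 137 * FEET_PER_MILE

seconds = target_separation_ft / relative_speed_ft_per_s
print(f"{seconds / 86400:.1f} days to drift 137 miles apart")  # ~4.2 days
```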

So what are these high-maintenance satellites that have to have their own special deployments?

The Iridium NEXT satellites are the latest in a series of deployments commissioned by the space-based communications company; they’re five of a planned 75 that will replace its old constellation and provide worldwide coverage. The last launch, in late March, went off without a hitch. This is the only launch with just five birds to deploy; the previous and pending launches all had 10 satellites each.

GRACE-FO is a “follow-on” mission (hence the FO) to GRACE, the Gravity Recovery and Climate Experiment, and a collaboration with the German Research Centre for Geosciences. GRACE launched in 2002, and for 15 years it monitored the presence and changes in the fresh water on (and below) the Earth’s surface. This has been hugely beneficial for climate scientists and others, and the follow-on will continue where the original left off.

The original mission worked by detecting tiny changes in the distance between the two satellites as they passed over various features — these tiny changes indicate how mass is distributed below them and can be used to measure the presence of water. GRACE-FO adds a laser ranging system that may improve the precision of this process by an order of magnitude.

Interestingly, the actual rocket that will be doing this complicated maneuver is the same one that launched the ill-fated Zuma satellite in January. That payload apparently failed to deploy itself properly after separating from the second stage, though because it was a classified mission no one has publicly stated exactly what went wrong — except to confirm that SpaceX wasn’t to blame.

The launch will take place at Vandenberg Air Force Base at 12:47 tomorrow afternoon Pacific time. If it’s aborted, there’s another chance on Wednesday. Keep an eye out for the link to the live stream of this unique launch!

Tiny house trend advances into the nano scale

All around the world, hip young people are competing to see who can live in the tiniest, quirkiest, twee-est house. But this one has them all beat. Assembled by a combination of origami and a nanometer-precise robot wielding an ion beam, this tiniest of houses measures about 20 micrometers across. For comparison, that’s almost as small as a studio in the Lower East Side of Manhattan.

It’s from the Femto-ST Institute in France, where the tiny house trend has clearly become an obsession. Really, though, the researchers aren’t just playing around. Assembly of complex structures at this scale is needed in many industries: building a special radiation or biological sensor in place on the tip of an optical fiber could let previously inaccessible locations be probed or monitored.

The house is constructed to show the precision with which the tools the team has developed can operate. The robot that does the assembly, which they call μRobotex, isn’t itself at the nano scale, but operates with an accuracy of as little as 2 nanometers.

The operator of μRobotex first laid down a layer of silica on the tip of a cut optical fiber less than the width of a human hair. They then used an ion beam to cut out the shape of the walls and add the windows and doors. By cutting through the material in some places but only scoring it in others, the beam creates stresses that cause the walls to fold upward and meet.

Once they’re in place, μRobotex switches tools and uses a gas injection system to attach those surfaces to each other. Once done, the system even “sputters” a tiled pattern on the roof.

Having built this house as a proof of concept, the team is now aiming to make even smaller structures on the tips of carbon nanotubes — ones that could comfortably pass through the house’s windows.

The researchers published their methods in the Journal of Vacuum Science and Technology.
