Author Archives: Devin Coldewey

Astronauts land safely after Soyuz launch fails at 20 miles up

A fault in a Soyuz rocket booster has resulted in an aborted crew mission to the International Space Station, but fortunately no loss of life. The astronauts in the capsule, Nick Hague (U.S.) and Alexey Ovchinin (Russia), successfully detached upon recognizing the fault and made a safe, if bumpy, landing nearly 250 miles east of the launch site in Kazakhstan. This high-profile failure could bolster demand for U.S.-built crewed spacecraft.

The launch proceeded normally for the first minute and a half, but at about that point, when the first and second stages were meant to separate, there was an unspecified fault, possibly a failure of the first stage and its fuel tanks to detach cleanly. The astronauts recognized the issue and immediately initiated the emergency escape system.

Hague and Ovchinin in the capsule before the fault occurred.

The Soyuz capsule detached from the rocket and began a “ballistic descent” (read: falling), arrested by a parachute before landing approximately 34 minutes after the fault. Right now that’s about as much detail on the actual event as has been released by Roscosmos and NASA. Press conferences have been mainly about being thankful that the crew is okay, assuring people that they’ll get to the bottom of this and kicking the can down the road on everything else.

Although it will likely take weeks before we know exactly what happened, the repercussions of this failure are immediate. The crew on the ISS will not be reinforced, and as there are only three astronauts up there right now, with a single Soyuz capsule with which to return to Earth, there’s a chance they’ll have to leave the ISS empty for a short time.

The current crew was scheduled to return in December, but NASA has said that the Soyuz is safe to take until January 4, so there’s a bit of leeway. That’s not to say they can necessarily put together another launch before then, but if the residents there need to stay a bit longer to safely park the station, as it were, they have a bit of extra time to do so.

The Soyuz booster and capsule have been an extremely reliable system for shuttling crew to and from the ISS, and no Soyuz fault has led to loss of life since the early 1970s, although there have been a few recent issues, including satellites dead on arrival and, of course, the hole found in one of the capsules this August.

This was perhaps the closest a Soyuz has come to a life-threatening failure in decades, and as such all Soyuz-based launches will be grounded until further notice. To be clear, this was a failure of the Soyuz-FG rocket, which is slated for replacement, not of the capsule or the newer rocket of the same name.

SpaceX and Boeing have been competing to create and certify their own crew capsules, which were scheduled for testing some time next year — but while the Soyuz issues may nominally increase the demand for these U.S.-built alternatives, the testing process can’t be rushed.

That said, grounding the Soyuz (if only for crewed flights) and conducting a full-scale fault investigation is no small matter, and if we’re not flying astronauts up to the ISS in one of them, we’re not doing it at all. So there is at least an incentive to perform testing of the new crew capsules in a timely manner and keep to as short a timeframe as is reasonable.

You can watch the launch as it played out here:

Copyright compromise: Music Modernization Act signed into law

Musicians are celebrating as the Music Modernization Act, an attempt to drag copyright and royalty rules into the 21st century, is signed into law after unanimous passage through Congress. The act aims to centralize and simplify the process by which artists are tracked and paid on digital services like Spotify and Pandora, and also extends the royalty treatment to songs recorded before 1972.

The problems in this space have affected pretty much every party. Copyright law and music industry practices were, as you might remember, totally unprepared for the music piracy wave at the turn of the century, and also for the shift to streaming over the last few years. Predictably, it isn’t the labels, distributors or new services that got hosed — it’s artists, who often saw comically small royalty payments from streams if they saw anything at all.

Even so, the MMA has enjoyed rather across-the-board support, because existing law is so obscure and inadequate. And it will remain that way to a certain extent — this isn’t layman territory and things will remain obscure. But the act will address some of the glaring issues in the current media landscape.

The biggest change is probably the creation of the Mechanical Licensing Collective. This new organization centralizes the bookkeeping and royalty payment process, replacing a patchwork of agreements that required lots of paperwork from all sides (and as usual, artists were often the ones left out in the cold as a result). The MLC will be funded by companies like Pandora or Google that want to enter into digital licensing agreements, meaning there will be no additional commission or fee for the MLC, but the entity will actually be run by music creators and publishers.

Previously, digital services and music publishers would enter into separately negotiated agreements, a complex and costly process if you want to offer a comprehensive library of music — one that stifled new entrants to the market. Nothing in the new law prevents companies from making such agreements now, as some will surely prefer to do, but the MLC offers a simple, straightforward alternative, including a blanket license option where you can just pay for all the music in its registry. This could in theory nurture new services that can’t spare the cash for the hundred lawyers required for other methods.

There’s one other benefit to using the MLC: you’re shielded from liability for statutory damages. Assuming a company uses it correctly and pays its dues, it’s no longer vulnerable to lawsuits that allege underpayment or other shenanigans — the kind of thing streaming providers have been weathering in the courts for years, with potentially massive settlements.

The law also improves payouts for producers and engineers, who have historically been under-recognized and certainly under-compensated for their roles in music creation. Writers and performers are critical, of course, but they’re not the only components to a great song or album, and it’s important to recognize this formally.

The last component of the MMA, the CLASSICS Act, is its most controversial, though even its critics seem to admit that it’s better than what we had before. CLASSICS essentially extends standard copyright rules to works recorded before 1972, the year copyright law changed considerably and left earlier recordings largely out of the bargain.

What’s the problem? Well, it turns out that many works that would otherwise enter the public domain would be copyright-protected (or something like it — there are some technical differences) until 2067, giving them an abnormally long term of protection. And what’s more, these works would be put under this new protection automatically, with no need for the artists to register them. That may sound convenient, but it also means that thousands of old works would be essentially copyrighted even though their creators, if they’re even alive, have asserted no intention of seeking that status.

A simple registry for those works was proposed by a group of data freedom advocates, but their cries were not heard by those crafting and re-crafting the law. Admittedly it’s something of an idealistic objection, and the harm to users is largely theoretical. The bill proceeded more or less as written.

At all events the Music Modernization Act is now law; its unanimous passage is something of an achievement these days, though God knows both sides need as many wins as they can get.

This box sucks pure water out of dry desert air

For many of us, clean, drinkable water comes right out the tap. But for billions it’s not that simple, and all over the world researchers are looking into ways to fix that. Today brings work from Berkeley, where a team is working on a water-harvesting apparatus that requires no power and can produce water even in the dry air of the desert. Hey, if a cactus can do it, why can’t we?

While there are numerous methods for collecting water from the air, many require power or parts that need to be replaced; what Berkeley professor Omar Yaghi has developed needs neither.

The secret isn’t some clever solar concentrator or low-friction fan — it’s all about the materials. Yaghi is a chemist, and has created what’s called a metal-organic framework, or MOF, that’s eager both to absorb and release water.

It’s essentially a powder made of tiny crystals in which water molecules get caught as the temperature decreases. Then, when the temperature rises, the water is released back into the air.

Yaghi demonstrated the process on a small scale last year, but now he and his team have published the results of a larger field test producing real-world amounts of water.

They put together a box about two feet per side with a layer of MOF on top that sits exposed to the air. Every night the temperature drops and the humidity rises, and water is trapped inside the MOF; in the morning, the sun’s heat drives the water from the powder, and it condenses on the box’s sides, kept cool by a sort of hat. The result of a night’s work: 3 ounces of water per pound of MOF used.

That’s not much more than a few sips, but improvements are already on the way. Currently the MOF uses zirconium, but an aluminum-based MOF, already being tested in the lab, will cost 99 percent less and produce twice as much water.

With the new powder and a handful of boxes, a person’s drinking needs are met without using any power or consumable material. Add a mechanism that harvests and stores the water and you’ve got yourself an off-grid potable water solution going.
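To put that in rough perspective, here’s a back-of-the-envelope calculation in Python. The 3-ounces-per-pound nightly yield is from the field test above; the 3-liters-per-day drinking figure and the assumption that the aluminum MOF simply doubles the yield are my own simplifications, not the researchers’ numbers.

```python
# Rough scaling of the reported yield: ~3 oz of water per lb of MOF per night.
# NOTE: the 3 L/day drinking-water figure below is an assumption for illustration.
OZ_TO_L = 0.0295735   # US fluid ounces to liters
LB_TO_KG = 0.453592   # pounds to kilograms

yield_l_per_kg = (3 * OZ_TO_L) / (1 * LB_TO_KG)   # ~0.2 L of water per kg of MOF per night
need_l_per_day = 3.0                              # assumed per-person drinking water

kg_zirconium_mof = need_l_per_day / yield_l_per_kg   # ~15 kg of the current MOF
kg_aluminum_mof = kg_zirconium_mof / 2               # aluminum MOF reportedly yields ~2x

print(f"{kg_zirconium_mof:.0f} kg of zirconium MOF or {kg_aluminum_mof:.0f} kg of aluminum MOF per person per day")
```

In other words, the "handful of boxes" claim is plausible only if the cheaper, higher-yield powder pans out as described.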

“There is nothing like this,” Yaghi explained in a Berkeley news release. “It operates at ambient temperature with ambient sunlight, and with no additional energy input you can collect water in the desert. The aluminum MOF is making this practical for water production, because it is cheap.”

He says that there are already commercial products in development. More tests, with mechanical improvements and including the new MOF, are planned for the hottest months of the summer.

Zoetrope effect could render Hyperloop tubes transparent to riders

An optical illusion popular in the 19th century could make trips on the Hyperloop appear to take place in a transparent tube. Regularly spaced, narrow windows wouldn’t offer much of a view individually, but if dozens of them pass by every second an effect would be created like that of a zoetrope, allowing passengers to effectively see right through the walls.

It’s an official concept from Virgin Hyperloop One and design house Bjarke Ingels Group (BIG), and in fact was teased back in 2016. Now the companies have shared a video showing how it would work and what it would look like for passengers — though there’s no indication it would actually be put in place in the first tracks.

A zoetrope is a simple apparatus consisting of a cylinder with slits on the sides and a series of sequential or looping images printed on the inside. When the cylinder is spun, the slits blur together to the eye but have the effect of showing the images on the inside clearly as if they are succeeding one another — an elementary form of animation.

The design concept shown is actually a linear zoetrope, in which the images are viewed not as a loop inside a cylinder, but in a long strip. You may have seen these before in the form of animated advertisements visible through the windows of subways.

In the case of the Hyperloop, the tube through which the “pod” moves would have portholes or slit windows placed every 10 meters through which the outside world is visible. At low speeds these would merely zoom by a few per second and might even be unpleasantly strobe-like, but that would smooth out as the pods reach their target speed of 1,200 km/h (about 745 mph).
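As a quick sanity check of those figures (a minimal sketch: the 10-meter spacing and 1,200 km/h speed are from the concept, while the roughly 24 fps persistence-of-vision threshold is a general rule of thumb, not something the companies cite):

```python
# How many windows pass per second at cruise speed?
speed_kph = 1200      # target pod speed from the concept
spacing_m = 10        # window spacing from the concept

speed_ms = speed_kph * 1000 / 3600          # ~333 m/s
windows_per_second = speed_ms / spacing_m   # ~33 windows per second

print(f"{windows_per_second:.0f} windows per second")
# ~33/s, comfortably above the ~24 fps at which discrete frames read as smooth motion
```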

The team simulated how it would appear in the video below:

Is it really necessary? You could, of course, just provide a faked view of the outside via LCD “portholes” or have people focus on their own little TV screens, like on an airplane. But that wouldn’t be nearly as cool. Perhaps the windows could double as escape or access hatches; as you can see above on the existing test track, there are already regular such holes, so this may be easier than expected to implement.

Of course, it all seems a little premature, since Hyperloop-type transport is still very much in prototype form and existing endeavors to bring it to life may in fact never come to fruition. Nevertheless, it’s a clever and interesting way of keeping people from dwelling on the fact that they’re traveling at ludicrous speeds down a narrow tube.

FCC has a redaction party with emails relating to mystery attack on comment system

You may remember the FCC explaining that in both 2014 and 2017, its comment system was briefly taken down by a denial of service attack. At least, so it says — but newly released emails show that the 2014 case was essentially fabricated, and the agency has so aggressively redacted documents relating to the 2017 incident that one suspects they’re hiding more than ordinary privileged information.

As a very quick recap: shortly after the comment period opened for net neutrality, and again for its rollback, there was a rush of activity that rendered the filing system unusable for a period of hours. This was corrected soon afterwards and the capacity of the system was increased to cope with the heavier traffic.

A report from Gizmodo based on more than 1,300 pages of emails obtained by watchdog group American Oversight shows that David Bray, the FCC’s chief information officer for a period encompassing both events, appears to have advanced the DDoS narrative with no real evidence or official support.

The 2014 event was not called an attack until much later, when Bray told reporters following the 2017 event that it was. “At the time the Chairman [i.e. Tom Wheeler] did not want to say there was a DDoS attack out of concern of copycats,” Bray wrote to a reporter at Federal News Radio. “So we accepted the punches that it somehow crashed because of volume even though actual comment volume wasn’t an issue.”

Gigi Sohn, who was Wheeler’s counsel at the time, put down this idea: “That’s just flat out false,” she told Gizmodo. “We didn’t want to say it because Bray had no hard proof that it was a DDoS attack. Just like the second time.”

And it is the second time that is most suspicious. Differing on the preferred nomenclature for a four-year-old suspicious cyber event would not be particularly damning, but Bray’s narrative of a DDoS is hard to justify with the facts we do know.

In a blog post written in response to the report, Bray explained regarding the 2017 outage:

Whether the correct phrase is denial of service or “bot swarm” or “something hammering the Application Programming Interface” (API) of the commenting system — the fact is something odd was happening in May 2017.

Bray’s analysis appears sincere, but the data he volunteers is highly circumstantial: large amounts of API requests that don’t match comment counts, for instance, or bunches of RSS requests that tie up the servers. Could it have been a malicious actor doing this? It’s possible. Could it have been bad code hammering the servers with repeated or malformed requests? Also totally possible. The FCC’s justification for calling it an attack seems to be nothing more than a hunch.

Later the FCC, via then-CIO Bray, would categorize the event as a “non-traditional DDoS attack” flooding the commenting system’s API. But beyond that it has produced so little information of any import that Congress has had to re-issue its questions in stronger terms.

No official documentation of either supposed attack has appeared, nor has the FCC released any data on them, even a year later, long after the comment period closed, improvements to the system were made and the CIO who evaded senators’ questions departed.

But most suspicious is the extent to which the FCC redacted documents relating to the 2017 event. Having read through the trove of emails, Gizmodo concludes that “every internal conversation about the 2017 incident between FCC employees” has been redacted. Every one!

The FCC stated before that the “ongoing nature” of the threats to its systems meant it would “undermine our system’s security” to provide any details on the improvements it had made to mitigate future attacks. And Bray wrote in his post that there was no “full blown report” because the team was focused on getting the system up and running again. But there is also an FCC statement saying that “our analysis reveals” that a DDoS was the cause.

What analysis? If it’s not a “significant cyber incident,” as the FBI determined, why the secrecy? If there’s no report or significant analysis from the day — wrong or right in retrospect — what is sensitive about the emails that they have to be redacted en masse? Bray himself wrote more technical details into his post than the FCC has offered in the year since the event — was this information sent to reporters at the time? Was it redacted? Why? So little about this whole information play makes sense.

One reasonable explanation (and just speculation, I should add) would be that the data do not support the idea of an attack, and internal discussions are an unflattering portrait of an agency doing spin work. The commitment to transparency that FCC Chairman Pai so frequently invokes is conspicuously absent in this specific case, and one has to wonder why.

The ongoing refusal to officially document or discuss what all seem to agree was an important event, whether it’s a DDoS or something else, is making the FCC look bad to just about everyone. No amount of redaction can change that.

Washington sues Facebook and Google over failure to disclose political ad spending

Facebook and Google were paid millions for political advertising purposes in Washington but failed for years to publish related information — such as the advertiser’s address — as required by state law, alleges a lawsuit by the state’s attorney general.

Washington law requires that “political campaign and lobbying contributions and expenditures be fully disclosed to the public and that secrecy is to be avoided.”

Specifically, “documents and books of account” must be made available for public inspection during the campaign and for three years following; these must detail the candidate, the name and address of the advertiser, the cost and method of payment, and a description of the services rendered.

Bob Ferguson, Washington’s attorney general, filed a lawsuit yesterday alleging that both Facebook and Google “failed to obtain and maintain” this information. Earlier this year, Eli Sanders of Seattle’s esteemed biweekly paper The Stranger requested to view the “books of account” from both companies, and another person followed up with an in-person visit; both received unsatisfactory results.

They alerted the AG’s office to these investigations in mid-April, and here we are a month and a half later with a pair of remarkably concise lawsuits. (This appears to be separate from the Seattle Election Commission’s allegations of similar failings by Facebook in February.)

All told Facebook took in about $3.4 million over the last decade, including “$2.5 million paid through political consultants and other agents or intermediaries, and $619,861 paid directly to Facebook.” Google received about $1.5 million over the same period, almost none of which was paid directly to the company. (I’ve asked the AG’s office for more information on how these amounts are defined.)

The total yearly amounts listed in the lawsuits may be interesting to anyone curious about the scale of political payments to online platforms at the state scale, so I’m reproducing them here.

Facebook

  • 2013: $129,099
  • 2014: $310,165
  • 2015: $147,689
  • 2016: $1,153,688
  • 2017: $857,893

Google

  • 2013: $47,431
  • 2014: $72,803
  • 2015: $56,639
  • 2016: $310,175
  • 2017: $295,473

(Note that these don’t add up to the totals mentioned above; these are the numbers filed with the state’s Public Disclosure Commission. 2018 amounts are listed but are necessarily incomplete, so I omitted them.)
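If you want to verify that note, here is a quick sum of the yearly figures as reproduced above (nothing beyond the numbers already listed):

```python
# Yearly amounts filed with Washington's Public Disclosure Commission, as listed above.
facebook = {2013: 129_099, 2014: 310_165, 2015: 147_689, 2016: 1_153_688, 2017: 857_893}
google = {2013: 47_431, 2014: 72_803, 2015: 56_639, 2016: 310_175, 2017: 295_473}

print(f"Facebook 2013-2017: ${sum(facebook.values()):,}")  # $2,598,534 vs. ~$3.4M cited for the decade
print(f"Google 2013-2017:   ${sum(google.values()):,}")    # $782,521 vs. ~$1.5M cited for the decade
```

The gap is presumably made up of earlier years, the incomplete 2018 filings and payments routed through consultants and other intermediaries.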

At least some of the many payments making up these results are not properly documented, and from the looks of it, this could amount to willful negligence. If a company is operating in a state and taking millions for political ads, it really can’t be unaware of that state’s disclosure laws. Yet according to the lawsuits, even basic data like names and addresses of advertisers and the amounts paid were not collected systematically, let alone made available publicly.

It’s impossible to characterize flouting the law in such a way as an innocent mistake, and certainly not when the mistake is repeated year after year. This isn’t an academic question: if the companies are found to have intentionally violated the law, the lawsuit asks that damages be tripled (technically, “trebled”).

Neither company addressed the claims of the lawsuit directly when contacted for comment.

Facebook said in a statement that “Attorney General Ferguson has raised important questions and we look forward to resolving this matter with his office quickly.” The company also noted that it has taken several steps to improve transparency in political spending, such as its planned political ad archive and an API for requesting this type of data.

Google said only that it is “currently reviewing the complaint and will be engaging with the Attorney General’s office” and asserted that it is “committed” to transparency and disclosure, although evidently not in the manner Washington requires.

The case likely will not result in significant monetary penalties for the companies in question; even if fines and damages totaled tens of millions it would be a drop in the bucket for the tech giants. But deliberately skirting laws governing political spending and public disclosure is rather a bad look for companies under especial scrutiny for systematic dishonesty — primarily Facebook.

If the AG’s suit goes forward and the companies are found to have intentionally avoided doing what the law required, they (and others like them) would be under serious pressure to comply in the future, not just in Washington but in other states where similar negligence may have taken place. AG Ferguson seems clearly to want to set a precedent and perhaps inspire others to take action.

I’ve asked the AG’s office for some clarifications and additional info, and will update this post if I hear back.

MyHeritage breach exposes 92M emails and hashed passwords

The genetic analysis and family tree website MyHeritage was breached last year by unknown actors, who exfiltrated the emails and hashed passwords of all 92 million registered users of the site. Neither credit card info nor (what would be more disturbing) genetic data appears to have been taken.

The company announced the breach on its blog, explaining that an unnamed security researcher contacted them to warn them of a file he had encountered “on a private server,” tellingly entitled “myheritage.” Inside it were the millions of emails and hashed passwords.

Hashing is a one-way cryptographic process that allows sensitive data like passwords to be stored without keeping the original values, and although there are theoretically ways to recover passwords from their hashes, they involve immense amounts of computing power and quite a bit of luck. So the passwords are probably safe, but MyHeritage has advised all its users to change theirs regardless, and they should.
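MyHeritage hasn’t said which hashing scheme it uses, so purely to illustrate the general idea, here’s a minimal sketch using PBKDF2 from Python’s standard library; the function names and iteration count are my own choices, not anything from the company.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash; only the (salt, digest) pair gets stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the attempted password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```

The point is that the site never stores the password itself, only the salt and the derived digest, so a stolen table is far less immediately useful to an attacker than plaintext passwords would be, though weak passwords can still be guessed offline.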

The emails are not fundamentally revealing data; billions have been exposed over the years through the likes of the Equifax and Yahoo breaches. They’re mainly damaging in connection with other data. For instance, the hackers could put 2 and 2 together by cross-referencing this list of 92 million with a list of emails whose corresponding passwords were known via some other breach. That’s why it’s good to use a password manager and have unique passwords for every site.

MyHeritage’s confidence that other data was not accessed appears to be for a good reason:

Credit card information is not stored on MyHeritage to begin with, but only on trusted third-party billing providers (e.g. BlueSnap, PayPal) utilized by MyHeritage. Other types of sensitive data such as family trees and DNA data are stored by MyHeritage on segregated systems, separate from those that store the email addresses, and they include added layers of security. We have no reason to believe those systems have been compromised.

Of course, until recently the company had no reason to believe the other system had been compromised, either. That’s one of those tricky things about cybersecurity. But we can do the company the credit of understanding from this statement that it has looked closely at its more sensitive servers and systems since the breach and found nothing.

Two-factor authentication was already in development, but the team is “expediting” its rollout, so if you’re a user, be sure to set that up as soon as it’s available.

A full report will likely take a while; the company is planning to hire an external security firm to look into the breach, and is working on notifying relevant authorities under U.S. laws and GDPR, among others.

I’ve asked MyHeritage for further comment and clarification on a few things and will update this post if I hear back.

Forget DeepFakes, Deep Video Portraits are way better (and worse)

The strange, creepy world of “deepfakes,” videos (often explicit) with the faces of the subjects replaced by those of celebrities, set off alarm bells just about everywhere early this year. And in case you thought that sort of thing had gone away because people found it unethical or unconvincing, the practice is back with the highly convincing “Deep Video Portraits,” which refines and improves the technique.

To be clear, I don’t want to conflate this interesting research with the loathsome practice of putting celebrity faces on adult film star bodies. They’re also totally different implementations of deep learning-based image manipulation. But this application of technology is clearly here to stay and it’s only going to get better — so we had best keep pace with it so we don’t get taken by surprise.

Deep Video Portraits is the title of a paper submitted for consideration this August at SIGGRAPH; it describes an improved technique for reproducing the motions, facial expressions, and speech movements of one person using the face of another. Here’s a mild example:

What’s special about this technique is how comprehensive it is. It uses a video of a target person, in this case President Obama, to get a handle on what constitutes the face, eyebrows, corners of the mouth, background, and so on, and how they move normally.

Then, by carefully tracking those same landmarks on a source video it can make the necessary distortions to the President’s face, using their own motions and expressions as sources for that visual information.

So not only do the body and face move like the source video, but every little nuance of expression is captured and reproduced using the target person’s own expressions! If you look closely, even the shadows behind the person (if present) are accurate.
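The actual system described in the paper is a trained rendering pipeline, but the core idea of driving one face with another’s tracked landmarks can be sketched simply. Everything below, from the array shapes to the crude scale normalization and the random stand-in data, is a toy illustration of the landmark-transfer step, not the authors’ method.

```python
import numpy as np

def transfer_expression(src_neutral, src_frame, tgt_neutral):
    """Map one face's landmark motion onto another face's landmarks.

    All arguments are (N, 2) arrays of 2D landmark positions (hypothetical data).
    """
    offsets = src_frame - src_neutral          # how the source face deformed from neutral
    scale = np.ptp(tgt_neutral, axis=0) / np.ptp(src_neutral, axis=0)  # crude size normalization
    return tgt_neutral + offsets * scale       # landmarks that would "drive" the target face

# Stand-in data: 68 landmarks per face, as in common facial-landmark schemes.
rng = np.random.default_rng(0)
src_neutral = rng.uniform(0, 100, (68, 2))
src_frame = src_neutral + rng.normal(0, 2, (68, 2))   # the source makes an expression
tgt_neutral = rng.uniform(0, 120, (68, 2))

driven = transfer_expression(src_neutral, src_frame, tgt_neutral)
print(driven.shape)   # (68, 2): the expression-transferred landmark set for the target
```

Landmark transfer alone would only warp pixels; what the research adds is learning, from video of the target, how to render those driven poses photorealistically, shadows and all.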

The researchers verified the effectiveness of this by comparing video of a person actually saying something on video with what the deep learning network produced using that same video as a source. “Our results are nearly indistinguishable from the real video,” says one of the researchers. And it’s true.

So, while you could use this to make video of anyone who’s appeared on camera appear to say whatever you want them to say — in your voice, it should be mentioned — there are practical applications as well. The video shows how dubbing a voice for a movie or show could be improved by syncing the character’s expression properly with the voice actor.

There’s no way to make a person do something or make an expression that’s too far from what they do on camera, though. For instance, the system can’t synthesize a big grin if the person is looking sour the whole time (though it might try and fail hilariously). And naturally there are all kinds of little bugs and artifacts. So for now the hijinks are limited.

But as you can see from the comparison with previous attempts at doing this, the science is advancing at a rapid pace. The differences between last year’s models and this year’s are clearly noticeable, and 2019’s will be more advanced still. I told you all this would happen back when that viral video of the eagle picking up the kid was making the rounds.

“I’m aware of the ethical implications,” co-author Justus Thies told The Register. “That is also a reason why we published our results. I think it is important that the people get to know the possibilities of manipulation techniques.”

If you’ve ever thought about starting a video forensics company, now might be the time. Perhaps a deep learning system to detect deep learning-based image manipulation is just the ticket.

The paper describing Deep Video Portraits, from researchers at Technicolor, Stanford, the University of Bath, the Max Planck Institute for Informatics, and the Technical University of Munich, is available for you to read here on arXiv.
