Purely out of curiosity, since I have not yet seen any software capable of fooling me in this regard: what would somebody use to do this? Is there an already existing product that can create video representations of people I know so well that it would fool me?
>I have not seen any software capable of fooling me
That belief is a catch-22, though. By definition, every time one fooled you, you noticed nothing beyond a run-of-the-mill video. A lot of TikTok accounts lately are dedicated to deepfaking celebrities. For example, if I hadn't already told you and you just casually scrolled by it, would you immediately suspect this isn't Jenna Ortega https://www.tiktok.com/@fake_ortegafan/video/732425793067973... ? I didn't look for the best example; that was just the very first that came up.
>Is this an already existing product
Usually cutting-edge ML has to be done via a GitHub repo last updated a few days ago, using TensorFlow/PyTorch and installing a bazillion dependencies. Then, months later, you might see it packaged up as a polished product on a startup's website. I've seen this repo a lot: https://github.com/chervonij/DFL-Colab
There was a paper linked on here recently (last few months?) that showed off video-call deepfaking using Gaussian splatting: essentially using a webcam to "puppet" a very convincing 3D recreation of another person's head and shoulders in real time.
I tried to find the link, but my search-fu is not good today, it seems.
There's also the fact from the article that this was an employee in Hong Kong on a video call with people supposedly in the UK, so it's also possible they took advantage of bad video quality to pull this off.
Get on video for the first minute or so, then, as we've all done, say "I'm going to turn off my video so my connection sounds better" etc...
This is where those 'security researchers' are helping to make such fraud easier. If you release these tools into the wild you are enabling criminals who by themselves would have no way to create these tools.
Security through obscurity does not work. As soon as deepfakes have proliferated on TikTok for stupid stuff, they'd inevitably be used for this kind of exploit by any adversary motivated enough to run a directed operation against a high-value target.
The researchers really just raise awareness of where things are going. Ultimately, the solution will be to improve processes and to verify anything involving money through specific internal company channels that are hard to forge; by procedure, anybody in a call like this who refuses to use them should automatically raise an alarm.
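To make the "hard to forge internal channel" idea concrete, here is a minimal sketch (my own illustration, not anything described in the article): the approver issues a one-time challenge over a separate internal channel, and the requester must answer with an HMAC of that challenge and the exact transaction details, keyed by a secret that lives only in the company's payment system. A deepfaked video call alone can never produce a valid response. All names and the secret here are hypothetical.

```python
import hmac
import hashlib
import secrets

# Hypothetical shared secret, stored only in the internal payment system,
# never spoken aloud or shown on a call.
SHARED_SECRET = b"stored-only-in-the-internal-payment-system"

def issue_challenge() -> str:
    """Approver's side: one-time nonce sent to the requester out of band."""
    return secrets.token_hex(16)

def sign_request(challenge: str, amount: str, payee: str) -> str:
    """Requester's side: bind the challenge to the exact transaction."""
    msg = f"{challenge}|{amount}|{payee}".encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(challenge: str, amount: str, payee: str, code: str) -> bool:
    """Approver's side: constant-time check of the expected code."""
    expected = sign_request(challenge, amount, payee)
    return hmac.compare_digest(expected, code)
```

Because the code covers the amount and payee as well as the nonce, an attacker who overhears a valid confirmation on the call can't replay it for a different transfer. The real point is procedural, though: any request that skips this channel is treated as fraud by default.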
Inventing new tech that has very obvious negative uses and zero positive ones isn't 'security through obscurity'; it is security through responsible behavior to say 'maybe I shouldn't'. Just because you can doesn't mean you should.
Just the idea that the perps in this case had the ability to code all of this up by themselves is ridiculous: 99.99% of the cybercrime out there is point-and-click with some downloaded tool, and maybe 0.01% is 'hackers' who use their own tools. Releasing all this junk in easy-to-use form is a very large factor in the rise of cybercrime. Imagine an outlet on every street corner where advanced weapons were given away freely, and then claiming that since someone could theoretically come up with any of these, there is no reason why we shouldn't be giving them out for free. That's roughly the level we are at.
There is some middle ground between researching how things could be done and releasing those tools to every wannabe criminal on the planet, many of whom are in places you'll never be able to reach from a legal point of view. Thousands of businesses are hacked every day with tools released by 'researchers' out to prove they are oh-so-smart, without a shred of consideration for the consequences.
I'm still not sure what you're suggesting. Do you want to police the world of software, only allowing releases that have obvious uses and limited negative effects? That won't really fly in a liberal society; people will tinker, unless you want to go down the dystopian path.
I mean, sure, you can ask nicely or try to shame people, but when did that ever accomplish anything of note?
I'm at the point where I see the whole security industry as parasitic: an industry that exists only to keep itself and the people in it employed, to the detriment of the rest of society. You want to research security stuff? Cool: keep it to yourself, don't release it. Because if you do release it, the only people who really benefit are the bad guys, and no amount of handwaving about how blessed we all are that you're releasing these exploits into the wild (and they are exploits, even if 'deepfakes' look superficially like they are not) for free and bragging rights makes it beneficial to society. It isn't. Having these skills should come with enough of a sense of responsibility to know how to use them without causing a lot of damage.
All we're doing is enabling a whole new class of criminal who is extra-judicial and able to extort and rob remotely while sitting safely on the other side of a legally impenetrable border. As long as that problem isn't solved, there is a substantial price tag attached to handing them further arms for their arsenal. The bulk of them are no better than glorified script kiddies who couldn't create even a percent of the tools that security researchers give them to play with.
There are strong parallels between arms manufacturing and the creation and release of these tools into the wild. Without that step there would be far less funding for the security industry as a whole, and I don't think that's an accident: by enabling the criminal side, the so-called 'white' side increases its own market value. They need the blackhats, because otherwise they too would be out of a job. Meanwhile, the rest of the world is collateral damage: either they see their money stolen (see TFA), they pay through the nose to the 'white hats' to keep their stuff secure (hopefully), or they pay through the nose to the black hats via extortion and theft.
I wish both parties would just fuck off, but only one of them is hopefully amenable to reason.
Being part of the security industry, I'm certainly not impartial, but your view seems a bit naive, and you seem generally angry at the world.
Thing is, when computers permeated society in the '90s, everything looked so simple and wondrous; few people did nefarious stuff, and those who did mostly did it for fun. Then, during the 2000s, computers matured within companies to the point that they became fundamental infrastructure, and that's where the complications start, as someone eventually wants to take advantage of that to make a profit, without regard for the means. The Internet bringing the world closer together of course changed the playing field.
Now, trust me, many companies would love to sing kumbaya and ignore the topic altogether, but that's just a way of presenting oneself as a low-hanging target, as many have painfully learned. And that includes both low-skill and targeted attacks at all levels. That's why there is a security industry: because IT infrastructure became so fundamental to how we do business.
Now it's a part of everyday life, a risk like any other externality: market cycles, supply chains, and a million other things. The main issue, really, is that back in the day nobody cared all that much, so few people got into this field, and thus there is a constant shortage.
But generally, the kind of thing in the article is just one of many security threats, both low- and high-skill, that companies face, and they need a sophisticated system/process to categorize and counteract them (both in terms of prevention and damage mitigation). Unless you manage to remove global inequality and the incentives to exploit affluent entities, this reality just is.
Now, I know this sounds grim, but statistically we are far better off than just a few decades ago, let alone centuries. Things get better; it's just in our human nature to bitch about them anyway. So take a deep breath and enjoy free delivery of basically anything you could want, at reasonable rates, straight from the other side of the world, while looking at bleak news that in no way reflects statistical reality (nobody wants to hear how well things work compared to 20-50 years ago; that's boring).
The idea that the criminals are broke, talentless hacks is just wrong. They're the ones with the time and the money, often more than industry researchers have. If a researcher finds a vulnerability in some widely used software or device, there's a high chance a malicious actor has already found it, or will soon. Not sharing research is how you let them operate in the dark.