This week, a BBC investigation reported that researchers from Imperial College London found that a webcam they tested sent data to 52 sources other than its manufacturer.

52. Let that sink in. Have a webcam connected to the internet at home? How about one sitting on your desk right now? Or, better yet, how many cameras have access to you at this moment?

Yeah, I'd be worried too - our lives have become increasingly connected.

If you'll excuse the pun, let's face it: the average consumer is unlikely to have special privacy latches on their webcam, that thing we so frequently expose to both our working and personal lives. And I guess the case isn't much different when it comes to our smartphones - one I purchased from a major manufacturer two years ago received maybe one or two security patches in its practical and physical lifetime. I'd be a foolish optimist to think I was safe from eavesdropping and the like, yet I couldn't really justify the purchase of another device, either financially or from an environmental perspective. Obviously, this is just one facet of a bigger problem.

Which is why, as we become increasingly connected, addressing the privacy concerns of connected devices should be an industry-wide priority.

Moreover, devices should be secure by design with LTS (Long Term Support) patching programmes available. 

This year, more than ever, the discussion of privacy as a luxury good has felt prominent - just take the recent exchange between Google and Apple.

If we let privacy become a luxury good, we will have failed as a tech industry as a whole. Everyone deserves the right to be secure online; it is a joint responsibility of ours as a society that upholds democratic values and free speech.

Of course, holding data securely also relies on the due diligence, cooperation and education of consumers and businesses alike. The issue is there for all of us to fix.

Now, having very briefly (and hopefully in a way that's not too much of a rant) outlined some of the problems with how we are losing our data to multiple sources online, let's take a look at a recent internet trend: deepfakes.

What is it?

In really simple terms, a deepfake is a technique that uses AI to combine and superimpose an image or video onto another image or video - you've probably seen examples of this on YouTube, or maybe even used a 'face swap' filter on a social media app that uses similar technology.
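To make the idea a bit more concrete, the crude "cut one face out and paste it over another" version can be sketched in a few lines of Python with OpenCV. To be clear, this is a minimal, purely illustrative sketch of the superimposition idea, not how deepfake tools actually work - real deepfakes come from deep generative models (typically autoencoders or GANs trained on lots of footage of both faces), and the file names below are placeholders I've made up.

```python
import cv2
import numpy as np

# Load the stock frontal-face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

source = cv2.imread("source.jpg")   # the face we want to copy (placeholder file)
target = cv2.imread("target.jpg")   # the image we want to paste it onto (placeholder file)

def first_face(img):
    """Return (x, y, w, h) of the first detected face, or None."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

src_box, dst_box = first_face(source), first_face(target)
if src_box is not None and dst_box is not None:
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    # Crop the source face and resize it to fit the target face's bounding box...
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    # ...then blend it into the target; seamlessClone smooths the edges a little.
    centre = (dx + dw // 2, dy + dh // 2)
    mask = np.full(face.shape[:2], 255, dtype=np.uint8)
    swapped = cv2.seamlessClone(face, target, mask, centre, cv2.NORMAL_CLONE)
    cv2.imwrite("swapped.jpg", swapped)
```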

OK, so why is this worrying?

Well, only last week, researchers from Samsung's AI centre in Moscow developed a technique that allowed them to create "living portraits" of their subjects, such as the Mona Lisa.

The results are really good. And while I'm not a Luddite (I'm sure there are plenty of great applications for this technology), in a time of fake news, leaky data and talk of privacy as a luxury, something about it leaves me a bit unsettled. I think we have to consider things carefully and act fast, don't you?