Should companies pay you for your personal data?

Imagine for a moment that you keep a journal detailing your daily activities: where you went, what you ate, what you bought, what shows you watched, who you talked to… all the usual journal-like things.

Now imagine that every night, your mom sneaks into your room, goes through what you wrote down, and then sells that information to other journal perusers and people who are really interested in what you journaled about.

That’s the current state of data. You might create it, and it might be your data, but someone else is out there making money from it.

It all started with Lou Montulli’s invention of the browser cookie in 1994, and the rest is history. Data collection in that fledgling stage was relatively benign, had no direct monetary value, and was used almost solely for the benefit of the user – but in the 25 years since, big data has grown into an industry expected to become a global market worth over $100 billion by 2022.
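For anyone who hasn’t peeked under the hood, here’s a minimal sketch of the mechanism Montulli introduced, written as a toy Python server (the cookie name and port are my own illustrative choices, not anything from a real tracker): the server hands the browser an identifier via a Set-Cookie header, and the browser sends it back on every later request – which is precisely what makes tracking possible.

```python
# Toy sketch of how a cookie works. Assumptions: the cookie name "visitor_id"
# and port 8000 are illustrative choices, not any particular tracker's setup.
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # If the browser already has our cookie, it arrives in the Cookie header.
        returning_id = self.headers.get("Cookie")
        self.send_response(200)
        if returning_id is None:
            # First visit: hand the browser an identifier it will echo back next time.
            self.send_header("Set-Cookie", f"visitor_id={uuid.uuid4()}")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"cookie seen: {returning_id}\n".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
```

Visit the page twice in the same browser and the second request arrives already carrying the identifier – no login, no consent prompt, just a header.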

There are a lot of different ways we create data: the sites we visit on the web, the apps we use on our phones, the smart watches that track our fitness routines, the smart home devices we have conversations with, the credit cards and payment services we use to buy things – all of these channels (and many, many more) simultaneously produce and collect data.

It’s almost like we’ve broken the first law of thermodynamics, or created a virtual perpetual motion machine; we create data by accessing data produced by others, which creates more data, which creates more data…

And all of this data is the backbone of an incredibly lucrative industry – and not just financially, either. The data we create by liking a photo or bingeing on our favorite show on Netflix is not only sold to marketers, it’s analyzed by a myriad of industries. The data one individual creates is almost meaningless, but when millions of daily lives can be studied, trends and behaviors (consumer or otherwise) come into view with a clarity only made possible by the sheer volume of data we create every day.

An interesting idea

It’s weird to think that while we create all this data, other people are making money off of it. It wouldn’t exist without us users, after all.

This past February, California’s governor, Gavin Newsom, floated an interesting idea that will soon be proposed as legislation. The bill, drafted by Common Sense Media, follows hot on the heels of the California Consumer Privacy Act – one of the most sweeping user privacy laws passed anywhere in the nation – and proposes that companies should indeed pay users for the data that is collected from them.

While the details of the bill haven’t been released yet, the hint of a bill such as this exemplifies the growing pains of the tech sector. We’re all just trying to keep up with each other.

The idea of a bill that would put a price on data raises a few questions, and the possible answers, at least for now, are pure speculation. Without the specifics of the bill, we can only guess at the method that will be proposed.

There are a few different ways this could work: companies could pay users directly for access to their data, the trading of data could be taxed, or companies could be taxed on the amount of data they collect or acquire. I’m sure there are more variations – I’m no economist – but I can say each comes with benefits and drawbacks.
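To make the trade-offs a bit more concrete, here’s a back-of-the-envelope sketch of those three options. Every number in it – the per-record prices, the tax rates, the record counts, the lower limit – is an assumption I made up purely for illustration; nothing here comes from the actual proposal, whose details aren’t public.

```python
# Back-of-the-envelope sketch of the three mechanisms described above.
# All figures are invented for illustration only.

RECORDS_COLLECTED = 50_000_000       # records a company collects in a year (assumed)
RECORDS_SOLD = 10_000_000            # records it resells to third parties (assumed)
PRICE_PER_SOLD_RECORD = 0.05         # dollars a broker pays per record (assumed)

# Option 1: pay users directly for every record collected.
PAY_PER_RECORD = 0.01                # assumed price point per record
direct_payout = RECORDS_COLLECTED * PAY_PER_RECORD

# Option 2: tax the trade itself (here, a flat cut of sale revenue).
TRADE_TAX_RATE = 0.10                # assumed
trade_tax = RECORDS_SOLD * PRICE_PER_SOLD_RECORD * TRADE_TAX_RATE

# Option 3: tax the stockpile of data held, above a lower limit.
HOLDINGS_TAX_PER_RECORD = 0.001      # assumed
TAX_FREE_ALLOWANCE = 1_000_000       # assumed lower limit before the tax kicks in
holdings_tax = max(0, RECORDS_COLLECTED - TAX_FREE_ALLOWANCE) * HOLDINGS_TAX_PER_RECORD

print(f"pay users directly: ${direct_payout:,.0f}")
print(f"tax on data trades: ${trade_tax:,.0f}")
print(f"tax on data held:   ${holdings_tax:,.0f}")
```

Even with toy numbers, the shape of the problem shows up: the direct-payment bill scales with how much a company collects, while the two taxes scale with how much it trades or hoards – which is exactly why the choice of mechanism matters so much.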

If companies were obliged to pay users directly for their data, they might simply stop the mass collection of data and focus on power users. With a price attached to every piece of data collected, businesses would naturally gravitate toward the users they could get the most data out of (reducing the time spent in the acquisition stage), and this would quite possibly produce a skewed data set.

Directly paying users for their data would probably spell doom for small businesses’ SEO and ASO campaigns, and would place an undue burden on startups while barely touching the budgets of tech giants like Facebook and Google – and the larger companies tend to be the crux of this whole issue.

Not to mention, a bill meant to put money back into users’ pockets would defeat its own purpose if it led to less data being collected – in fact, the more data that is produced, collected, and traded, the more wealth users would receive if a bill such as this one were to pass.

Compounding the problem, legally requiring companies to pay only for the data they collect themselves would be akin to planting one tree and declaring it a forest. A lot of the focus on this topic falls on social media – which does make up a huge facet of data collection – but it’s not just Instagram and Snapchat that are utilizing our data.

Banking and healthcare are two industries that acquire and analyze vast amounts of user data, and these businesses don’t usually conduct data collection themselves – more often than not, they acquire data by purchasing it from other companies.

The other two options are much more attractive, but come with their own sets of issues. If we were to place a tax on the trading of user data, would the tax be placed on the profits of the sale, or added on to the purchase? Would there be a set limit to how much data can be traded before being taxed? Again, depending on the scope of taxing the trading of data, an undue burden could be placed on smaller companies.

If there were a tax on the amount of data collected or purchased, again, a lower limit would need to be set. But this is all speculation. Despite these concerns, it’s my opinion that the wealth produced from data traded between companies should, in some way, contribute to the wealth of the users who produced that data. Why? Because…

The future

Remember the 10 Year Challenge? There’s been a lot of speculation that it was a data collection scheme hidden in plain sight – the extra user engagement was a nice bonus, but the real purpose, so the theory goes, was to help train facial recognition AI.

Data collection has only (and will continue to) become more personal. Xbox One’s Kinect camera can recognize your facial expressions, determine your emotions, and read your heart rate – and while that opens up amazing possibilities for UX in video games (especially those in the horror genre), it’s incredibly personal data. The iPhone X uses tech similar to this as well.

Just imagine the innumerable ways this tech could be applied today – almost every device has some sort of camera installed, and no special hardware is needed to read a user’s emotions; the AI does the heavy lifting. It’s an almost absolute certainty that advertisers will pair facial recognition with emotion recognition to study how audiences respond to their ad campaigns. Your smart TV already tracks what you watch – soon it could be analyzing your mood, too.
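To illustrate just how low the bar is, here’s a minimal sketch assuming nothing more than OpenCV (the opencv-python package) and an ordinary webcam – no special hardware. It grabs a single frame and locates faces with OpenCV’s bundled Haar cascade; the emotion step is left as a placeholder comment, since that would require a separately trained model.

```python
# Minimal sketch, assuming OpenCV (pip install opencv-python) and a default webcam.
# It reads one frame and finds faces with the bundled Haar cascade; a real system
# would pass each face crop to an emotion-classification model (not included here).
import cv2

def detect_faces_once():
    cap = cv2.VideoCapture(0)          # default camera; any laptop webcam will do
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        face_crop = gray[y:y + h, x:x + w]
        # Placeholder: hand face_crop to an emotion model here.
        print(f"face at ({x}, {y}), size {w}x{h} px")

if __name__ == "__main__":
    detect_faces_once()
```

A couple of dozen lines and a commodity camera – everything beyond this point is just software and training data.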

The problem is that all of this tech is super cool. I personally can’t wait for VR facial recognition to really come to fruition, but it’s still disconcerting that any developer with access to the back end of this technology could turn around and sell data about your face to another company – a sale that is of no benefit to you.

The whole issue of big data is just plain weird. There’s never been anything like it before. The best analogy I can come up with is this: it would be like a company hosting a focus group where the subjects paid the rent on the meeting space, paid the company for the privilege of being studied, divulged incredibly personal information – and then, after all that, asked the company, “When can I give you more money?”

We live in a universe where matter and energy can neither be created nor destroyed, and there’s no escaping the inexorable pull of entropy. Except with data. We can make data out of thin air, every second of every day.

And as AI becomes smarter, companies can do more with the data sets they have – AI is extremely adept at taking in and analyzing vast quantities of individual data points to find patterns the human brain wouldn’t be able to comprehend.
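As a toy illustration of that kind of pattern-finding, here’s a sketch assuming scikit-learn and entirely synthetic “behavior” data – the segments, features, and numbers are all invented. The point is the shape of the technique: feed in thousands of unlabeled data points, and clustering hands back user segments nobody defined by hand.

```python
# Toy sketch of pattern-finding in behavioral data. Assumptions: scikit-learn is
# installed, and the "users" (hours watched, purchases, likes per week) are
# synthetic numbers invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three synthetic user segments, 1,000 users each, three behavioral features.
segments = [
    rng.normal(loc=[2, 1, 5], scale=0.5, size=(1000, 3)),    # light users
    rng.normal(loc=[10, 4, 40], scale=1.5, size=(1000, 3)),  # bingers
    rng.normal(loc=[5, 12, 15], scale=1.0, size=(1000, 3)),  # heavy shoppers
]
users = np.vstack(segments)

# Cluster without ever telling the model what the segments are.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(users)
print("segment sizes:", np.bincount(model.labels_))
print("segment centers (hours, purchases, likes):")
print(model.cluster_centers_.round(1))
```

Scale the same idea up to millions of real people and hundreds of features, and you get the trend-spotting clarity described earlier – without any individual ever being asked a question.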

It doesn’t end at facial recognition either – your voice is being analyzed as well. Perhaps the most troublesome form of data collection is biological data. Companies like 23andMe can sell your genetic makeup for a profit – and there’s a whole new set of ethical conundrums to tackle there.

It’s a super complicated issue that will only become more intricate as time goes on, and while the proposed bill in California might not have all the answers (especially since we don’t know the details yet), at least we’re moving in the right direction – recognizing there’s an issue, and having a discussion about it. The tech industry is particularly skilled at collecting and analyzing data – it’s about time we analyze ourselves.
