Comments
@AndSoWeCode I agree with you, but with predictive targeting the way Facebook will do it, I think at some point you will do what the AI thinks you'll do, not because the AI observed your behavior and knows it, but because the AI predicted it in the first place.
You'll behave the way the AI predicted, not the other way around.
That way the AI shapes your future self, not you. And that would make humans slaves of the AI.
I'm overstating here, but you get my point on why I think this is dangerous.
(English is not my first language; I hope that was clear enough.)
A very simple example of how AI could go wrong in a way many people don't think of:
There are algorithms that claim to infer someone's sexual orientation from a selfie, though it's unclear how accurate they are.
What if someone deploys this on public crowds, or on social media pictures, in countries where being gay carries the death penalty? Then your selfies suddenly become something very much worth hiding.
What if an algorithm predicts criminal actions, Facebook starts working with law enforcement using that information, and you're arrested as a precaution while the algorithm's output is simply wrong?
I think any kind of technology can be used for nasty things if it's in the wrong hands.
That has always been the issue with everything: some people create things because they can help, while others just look for what kind of wrongdoing the new tool enables. Morals will always be part of the story, so there isn't much we can do aside from trusting whoever is using the tech.
If we don't trust them, there's usually a way to bail out, and even if there isn't, you can oppose it. But here's the catch: as is normal for humans, we fear what is new and unknown, and fearing change and trying to avoid it is what kills humanity's progress.
Morals and ethics are concepts that rely heavily on a society's context. Having a kind of "ethics board" for the use and application of new technologies would be a great way to handle things, but we'd have to agree on a shared ethical standard that some countries or religions might find offensive...
@irene Uhm, they have literally deployed a system over here that takes data on what happened and tries to predict which people are most likely to be involved in crime-related activity. It's called SyRI, and it's been running here for a while now.
The second thing isn't bullshit either; such a system was actually built by Stanford University researchers.
@irene No, but if you commit a crime and the AI predicts that you're likely to commit another one in the future, you'll get a harsher punishment. This is already the case in some US states.
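For what it's worth, here is a minimal sketch of how such a "reoffending risk score" works. It is purely hypothetical: the real tools used in US courts (e.g. COMPAS) are proprietary, and the feature names and weights below are invented for illustration.

```python
import math

def risk_score(prior_convictions: int, age: int, failed_appearances: int) -> float:
    """Return a score in [0, 1]; higher means 'predicted more likely to reoffend'."""
    # Invented coefficients standing in for what a real model would learn from data.
    z = 0.6 * prior_convictions - 0.04 * age + 0.5 * failed_appearances
    return 1 / (1 + math.exp(-z))  # logistic squashing into a score

# The danger: a harsher sentence can follow from a high score even when the
# underlying data (or the model itself) is biased or simply wrong.
score = risk_score(prior_convictions=3, age=24, failed_appearances=1)
if score > 0.7:
    print(f"flagged as high risk ({score:.2f})")
```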

Facebook be like: "We'll never sell your data to other companies. We'll just do ourselves what companies like Cambridge Analytica did, but more aggressively and better."
Apparently they are predicting what you'll do in the future and how you'll behave based on what you did in the past. This gives them the ability to sell ads targeted at what your future self will do, which in turn means your future self is forged by those ads.
This is extremely dangerous.
Source:
https://theintercept.com/2018/04/...
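To make the mechanism concrete, here is a minimal, purely hypothetical sketch of prediction-driven ad targeting. Everything in it (the data, the naive predictor, the ad picker) is invented for illustration and is not Facebook's actual system.

```python
from collections import Counter

def predict_next_interest(past_clicks):
    """Naive predictor: assume the most frequent past interest continues."""
    return Counter(past_clicks).most_common(1)[0][0]

def pick_ad(user_history):
    predicted = predict_next_interest(user_history)
    # The feedback loop the comments above worry about: the ad shown is chosen
    # by the prediction, and the ad in turn nudges the user toward the
    # predicted behavior, making the prediction self-fulfilling.
    return f"ad for {predicted}"

print(pick_ad(["running shoes", "protein bars", "running shoes"]))
# -> ad for running shoes
```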
rant