Wednesday, May 22, 2019

Watching you watching me.



Facial recognition kiosks. I think the thing that's more frustrating than the technology is the mindset of the people making this stuff right now. If you try at all to push back, they make you feel like a Luddite, or like you have something to hide. Very Google-esque.

I have a pretty liberal opinion about public places. But I think the public is getting very creeped out by this stuff. And I really just think it won't achieve what they claim. There are going to be so many false positives that their dataset will be worthless.
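
Here's a back-of-the-envelope sketch of why, with numbers I'm making up purely for illustration: even a matcher with generous accuracy drowns in false alarms when the people it's hunting for are rare in the crowd.

    # Base-rate arithmetic with made-up numbers: a very accurate face
    # matcher still produces mostly false alarms when targets are rare.
    daily_faces = 50_000        # kiosk scans per day (assumed)
    targets = 5                 # actual people of interest in that crowd (assumed)
    true_positive_rate = 0.99   # generous accuracy assumptions
    false_positive_rate = 0.01

    real_hits = targets * true_positive_rate                      # ~5
    false_alarms = (daily_faces - targets) * false_positive_rate  # ~500
    precision = real_hits / (real_hits + false_alarms)

    print(f"real hits: {real_hits:.0f}, false alarms: {false_alarms:.0f}")
    print(f"chance a given alert is real: {precision:.1%}")       # about 1%

Roughly one alert in a hundred would point at a real match; the rest is noise.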

It also makes me think that A.I. is in a big fat bubble.

5 comments:

  1. Capital of Texas Refugee, Friday, May 24, 2019 3:02:00 AM

    "If you try at all to push back, they make you feel like a luddite or like you have something to hide. Very Google-esque."

    When turn-about becomes fair play, none of them can say they weren't warned.

    "It also makes me think that A.I. is in a big fat bubble."

    It's not really much in the way of AI, that's why.

    Linguistic data mining combined with mass behavioral studies can fool a lot of people into believing it's some kind of AI, but it's just another magic trick.

    Most of the time, when I've seen any attempt at a serious discussion of what genuine AI would be about, I've seen these people trying to shut that discussion down because it really and truly scares the shit out of them.

    The infamous "Roko's Basilisk" incident, for instance ...

    That particular incident had its uses, but mostly as a means by which the Internet's "AI Concern Police" were made identifiable to everyone else.

    On the positive side, should the "Roko's Basilisk" problem turn out to be something real, these people will be the first against the wall when that revolution comes, so there's that. :-)

    I LOOK FORWARD TO SINGING THE FLOPPY COCK SONG WITH AN AI.

    Also, we will join in on orchestral movements from the hood about butts.

    It shall be glorious. :-)

  2. I guess I don't believe in an A.I. god either. I'd never heard of Roko's Basilisk. And I don't quite understand how it punishes you.


    "Linguistic data mining combined with mass behavioral studies can fool a lot of people into believing it's some kind of AI, but it's just another magic trick."

    That's just the thing. I can totally hate something and still smile about seeing it. Mr S. says that what will likely happen is they'll mine all of the data and sell it to their competitors, to the competitors' benefit. Which seems most plausible. At least it makes me feel better knowing their lameness could also lead to their demise.

  3. Capital of Texas Refugee, Friday, May 24, 2019 8:25:00 PM

    About "Roko's Basilisk": it's actually a Good Samaritan/Bad Samaritan problem in disguise.

    If you could help, but you didn't, then this thing that might emerge may believe you're worthy of punishment.

    If you couldn't help, and you didn't, then you're not involved.

    If you could help, and you did, then you might be rewarded.

    However, if the ultimate end isn't a desired end, then any Good Samaritan act becomes a Bad Samaritan act.

    And so it comes down to being rewarded for what society deems Bad Samaritan behavior, even though it's of potentially huge benefit to someone, or something, that then deems those acts Good Samaritan acts.

    Despite all of the bullshit and drama that one "less wrong" person in particular unleashed on the Internet in the aftermath of someone posing this problem, it actually has little more to it than Blaise Pascal's mental meanderings on whether you should worship some kind of deity because it might punish you for doing otherwise.

    However, for me, if I can be a Bad Samaritan, then that must mean I can also be a Much Worse Samaritan, and so my tendencies run toward giving everyone the results they want, good and hard.

    And so I think what that means is this: BRING ON THE CAMERAS, BITCHES.

    Because the total collection of trivial and useless data everywhere will inevitably undermine every bit of this stupid data mining and behavioral studies thing going on now. And in the instances where it's not only unwelcome but also illegal, the number of court cases will burn out every lower court's capacity to hear them, forcing some kind of change.

    Let's make this absurdity burn out faster, please.

    In order to provide even more fuel for this fire, I want surveillance laws like there are in the UK, where I can demand my CCTV footage without having to compensate the camera owners for anything resembling their true costs.

    Should that ever happen in the US, I am highly tempted to create a new company for the purposes of bureaucratic revenge, a company with a single purpose: making sure all the owners of the cameras go broke from having to process the tens of thousands of CCTV footage requests the company would file on behalf of the people who pay for the service.

    A little GPS-enabled app on the person's phone would identify all of the cameras involved with a person's journeys and then automatically request CCTV footage from all of them, something like the sketch below.
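
    A minimal sketch of that matching step, in Python, where the camera registry, the radius, and the request format are all invented for illustration and not any real API:

        from math import radians, sin, cos, asin, sqrt

        def meters_between(lat1, lon1, lat2, lon2):
            # Haversine great-circle distance between two lat/lon points, in meters.
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 6_371_000 * 2 * asin(sqrt(a))

        CAMERA_REGISTRY = [  # (owner, lat, lon) -- invented sample data
            ("MegaMart #41", 30.2672, -97.7431),
            ("Parking Co.", 30.2690, -97.7450),
        ]

        def footage_requests(gps_trail, radius_m=50):
            # Yield one footage request per camera within radius_m of any trail point.
            seen = set()
            for ts, lat, lon in gps_trail:
                for owner, clat, clon in CAMERA_REGISTRY:
                    if owner not in seen and meters_between(lat, lon, clat, clon) <= radius_m:
                        seen.add(owner)
                        yield {"owner": owner, "near_time": ts, "request": "all CCTV footage"}

        trail = [("2019-05-24T12:00", 30.2671, -97.7430),
                 ("2019-05-24T12:05", 30.2691, -97.7449)]
        for req in footage_requests(trail):
            print(req)

    Every camera the phone walks past becomes one more request in the pile, automatically.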

    Even if the camera owners figure out some way to automate this process so it doesn't cost too much in terms of time and effort, there's a point at which the company could break each and every one of these camera owners just from the sheer volume of information requests.

    Also, for people who really aren't up to anything bad, this would provide legal cover in advance should these people be accused of Very Bad Things, so there's that justification first and foremost, because we are being Very Good Samaritans here, of course.

    This is me being An Even Worse Samaritan.

    AI is generally where the con artists in the algorithmic world reside.

    It was that way about forty years ago, when they were trying to sell everyone on how expert systems weren't going to be coding in paradoxes, derangement, and algorithmic insanity. Meanwhile, they'd demo some heuristic Babbage's Box of rules that would slowly crank itself into non-operability after the addition of a few tens of thousands of rules.

    This is essentially how I see Google, BTW, and it explains the absurd and inhumane insanity that comes out of that particular Chocolate Factory.

    All problems in AI eventually resemble the Halting Problem: halting is the only safe action once the systems have overgrown themselves.
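
    A toy illustration of that overgrowth, with rules invented for the example: add two well-intentioned rules that contradict each other, and a forward-chaining engine never reaches a fixed point, so killing it is the only remaining move.

        # Tiny forward-chaining loop: each pass, every rule whose condition
        # holds gets to fire; we stop only when a pass changes nothing.
        def run(facts, rules, max_steps=10):
            for step in range(max_steps):
                changed = False
                for condition, action in rules:
                    if condition(facts):
                        action(facts)
                        changed = True
                if not changed:
                    print(f"settled after {step} steps: {facts}")
                    return
            print(f"still churning after {max_steps} steps: {facts}")

        facts = {"threat": True}
        rules = [
            # Rule A: a flagged threat gets handled and marked safe.
            (lambda f: f.get("threat"), lambda f: f.update(threat=False, safe=True)),
            # Rule B, added later: anything marked safe gets re-flagged for audit.
            (lambda f: f.get("safe"), lambda f: f.update(threat=True, safe=False)),
        ]
        run(facts, rules)  # never settles; it just churns until cut off

    Two rules are enough to do it; a few tens of thousands just hide where the loop lives.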

    And so the eventual demise and breakup of Google?

    Google wants to be an algorithmic machine, so I suppose it should be allowed to die like one, at the hands of completely predictable math ...

  4. Besides, AI is 'da rasis':

    https://www.bostonmagazine.com/news/2018/02/23/artificial-intelligence-race-dark-skin-bias/

  5. That is likely true.

    For a while they had a hard time figuring out stuff about women. It would get my age off by a lot! Likely that was because they had fewer female faces in their datasets. So, from experience, I think that is likely true. They also have less data for that demographic because they test on the people who program the systems. It doesn't mean it's racist. It's just that the other demographic doesn't want to work in that field in great numbers. Because being a nerd will get you beat up.
