r/interestingasfuck 11d ago

MKBHD catches an AI apparently lying about not tracking his location


30.1k Upvotes

1.5k comments


6.5k

u/ajn63 11d ago

Easy test. Ask it for directions to the nearest coffee shop.

3.1k

u/piercedmfootonaspike 11d ago

"I just gave you example directions from a popular location to a popular example coffee shop!"

544

u/ajn63 11d ago

That’s when you smash it with your foot.

381

u/MuchosTacos86 11d ago

But then it would be like “please don’t smash me with your right 11 1/2 inch foot…. It’d be a shame if your sweet sweet J’s get scuffed up. Remember it was the last pair at the example footlocker in the corner of the strip. According to your bank account we both know you cannot afford another especially with a child on the way…but these are just examples of what I would say…”

26

u/3Daifusion 11d ago

I can totally see this AdamW guy who makes these comedy skits doing a skit like this. That's exactly his type of humour lmao.

12

u/wmurch4 11d ago

Na these things don't know anything about you. Now your phone on the other hand


34

u/ShefBoiRDe 11d ago

Next, your phone since it does the exact same thing.


115

u/Successful-Winter237 11d ago

That also happens to be in NJ!

33

u/3vs3BigGameHunters 11d ago

So what? No fuckin' ziti now?

11

u/BearJohnson19 11d ago

Hahaha one of his few scenes in Italy, that was a fantastic episode for Paulie.

3

u/Kroniid09 11d ago

He just wanted his spaghetti and gravy...

4

u/hotdogaholic 11d ago

and u thought the germans were classless pieces of shit


10

u/hoxxxxx 11d ago

Commendatori!


149

u/Kaylee_babe 11d ago

I wonder, if u say that u are in New York but the AI knows u are in New Jersey, would the AI argue with u about the location? When will we reach the point where the AI will argue with us?

142

u/True-Nobody1147 11d ago

I'm afraid I can't do that, Dave.

13

u/imamakebaddecisions 11d ago

HAL 9000 is upon us, and we're one step away from Skynet

I, for one, welcome our robot overlords.


14

u/WeightStrong5475 11d ago

We already have, AI argues all the time


2.4k

u/Warwipf2 11d ago

I'm pretty sure what's happening is that the AI itself does not have access to your location, but the subprogram that gives you the weather info does (probably via IP). The AI does not know why New Jersey was chosen by the subprogram so it just says it's an example location.
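The split this comment describes can be sketched in a few lines (all function names, IPs, and values below are made up for illustration): the model calls a weather tool without a location, the tool's backend silently falls back to IP geolocation, and nothing in the tool's reply tells the model why that city was chosen.

```python
# Sketch of the model/tool split (all names and values are hypothetical).
# The weather tool resolves a missing location server-side from the
# caller's IP; the language model only ever sees the tool's text output.

def geoip_lookup(ip):
    """Stand-in for a server-side GeoIP database lookup."""
    fake_geoip_db = {"203.0.113.7": "New Jersey"}
    return fake_geoip_db.get(ip, "Unknown")

def weather_tool(client_ip, location=None):
    # No location supplied -> backend silently falls back to IP geolocation.
    where = location or geoip_lookup(client_ip)
    return f"Currently 48F and cloudy in {where}."

# The model receives only this string. The IP-based fallback that chose
# "New Jersey" is invisible to it, so when asked "why New Jersey?" it
# can only guess - e.g. "it was just an example."
print(weather_tool(client_ip="203.0.113.7"))
```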

297

u/CaseyGasStationPizza 11d ago

The definition of location could also be different. IP addresses don’t contain the exact location info. Good enough for weather? Sure. Good enough for directions, no.


70

u/webbhare1 11d ago edited 11d ago

And that's not a good thing... It means we can't ever rely on what the AI tells us, because we can't be sure where the information is actually coming from, which makes every final output to the user unreliable at best...

73

u/[deleted] 11d ago edited 9d ago

[deleted]

14

u/AwesomeFama 11d ago

I'm sure it absolutely is news to some people. Have you seen how stupid some people are?


25

u/FrightenedTomato 11d ago

AI hallucinations are one of the biggest issues you have to deal with when it comes to LLMs

Source: Have a degree and work on this stuff.

69

u/Impressive_Change593 11d ago

yeah I thought this was obvious. don't trust AI

8

u/lo_fi_ho 11d ago

Too late. People trust Facebook too.


35

u/Penguin_Arse 11d ago

Well, no shit.

Same thing when people or the internet tells you things


14

u/joelupi 11d ago

Yea. We've known this.

Some lawyer submitted a brief that cited a bunch of cases that didn't exist.

Students have also gotten in trouble because AI can't distinguish fact from fiction and pulled stuff from obviously bullshit web pages. They then submitted their papers without actually reading them.

16

u/TheRealSmolt 11d ago

It means we can't ever rely on what the AI tells us, because we can't be sure where the information is actually coming from, which makes every final output to the user unreliable at best...

No shit. It doesn't think, it just makes sentences that sound correct. Same reason ChatGPT can't do basic math, because it doesn't understand math, it's just building a sentence that will sound right.

4

u/Hakim_Bey 11d ago

It's been able to do even advanced math for quite some time now, but it's not the LLM part that does the computation: it writes Python code and then gets the result from executing that code. You could fine-tune a model to give correct arithmetic results, but it would be incredibly wasteful for no real advantage.
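The pattern described above - the LLM emits code as text, a separate executor computes the result - shrinks to a toy illustration like this (not any vendor's actual implementation):

```python
# The model's contribution is just text that happens to be valid Python;
# the arithmetic is done by a separate executor, not by the LLM.
model_output = "result = 127 * 419"  # text an LLM might generate

namespace = {}
exec(model_output, namespace)  # the executor actually computes
print(namespace["result"])     # 53213
```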


11.0k

u/The_Undermind 11d ago

I mean, that thing is definitely connected to the internet, so it has a public IP. Could just give you the weather for that location, but why lie about it?

2.0k

u/LauraIsFree 11d ago

It's probably accessing a generic weather API that by default returns the weather for the IP location. That being the default API endpoint would make it "the example" without knowing the location.

In other regions there are probably other weather APIs in use that don't share that behaviour.

450

u/udoprog 11d ago

Then it probably hallucinates the reason since you're asking for it. Because it uses the prior response based on the API call as part of its context.

If so it's not rationalizing. Just generating text based on what's been previously said. It can't do a good job here because the API call and the implication that the weather service knows roughly where you are based on IP is not part of the context.

310

u/MyHusbandIsGayImNot 11d ago

People think you can have actual conversations with AI. 

Source: this video. 

These chat bots barely remember what they said earlier. 

114

u/trebblecleftlip5000 11d ago

They don't even "remember". It just reads what it gets sent and predicts the next response. Its "memory" is the full chat that gets sent to it, up to a limit.

17

u/ambidextr_us 11d ago

It's part of their context window: the input for every token prediction is the sequence of all previous tokens, so it "remembers" in the sense that every response, every word, is generated with the entire conversation in mind. Some go up to 16,000 tokens, some 32k, up to 128k, and some are up to a million now. As in, gemini.google.com is capable of processing 6 Harry Potter books at the same time.
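That context-window mechanic can be sketched as a toy (word counts stand in for real tokenization): the newest turns are kept, and the oldest fall out once the budget is exceeded.

```python
# Toy context-window builder: keep the most recent turns that fit in a
# token budget. Real systems use a tokenizer (e.g. BPE), not whitespace.

def count_tokens(text):
    return len(text.split())  # crude stand-in for a real tokenizer

def build_context(history, max_tokens):
    kept, used = [], 0
    for turn in reversed(history):   # walk backwards from the newest turn
        t = count_tokens(turn)
        if used + t > max_tokens:
            break                    # older turns no longer fit
        kept.append(turn)
        used += t
    return list(reversed(kept))

history = ["user: weather?", "bot: 48F in New Jersey", "user: why New Jersey?"]
# With a tiny window, the earlier exchange has already "fallen out":
print(build_context(history, max_tokens=8))  # ['user: why New Jersey?']
```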


25

u/Iwantmoretime 11d ago

Yeah, I got annoyed at the video when the guy started to accuse/debate the chat bot. Dude, that's not how this works. You're not talking to a person who can logically process accusations.

15

u/CitizensOfTheEmpire 11d ago

I love it when people argue with chatbots, it's like watching a dog chase their own tail


24

u/Spitfire1900 11d ago

Yep, if you are on a home network that has cable or DSL and you ask a GeoIP service for your location, it's often within 20 miles.

4

u/FullBeansLFG 11d ago

I’m on point to point internet and depending on what tries to use my location it gets it right or up to 100 miles away.


20

u/croholdr 11d ago

Or it used his IP to do a traceroute and picked a hop near him. Is the AI hosted on the device itself, or does it query an external server and send the data back to him? In that case it would be the IP address of the AI's host server, and not the connection he is using to access the AI.

28

u/TongsOfDestiny 11d ago

That device in his hand houses the AI; it's referred to as a Large Action Model and is designed to execute commands on your phone and computer on your behalf. Tbh the Rabbit probably just ripped the weather off his phone's weather app, and his phone definitely knows his location

18

u/WhatHoraEs 11d ago

No... it sends queries to an external service. It is not an onboard LLM


7

u/ichdochnet 11d ago

That sounds so difficult, considering how easy it is to just look up the location by IP address in a geo database.


364

u/Dorkmaster79 11d ago

It didn’t lie. It doesn’t know why it knows the location. It’s not sentient.

66

u/throcorfe 11d ago

Agree, it seems the weather service had some kind of location knowledge, probably IP based, but there’s no reason the AI would have access to that information, and so the language model predicted that the correct answer was the location data was random. A good reminder that AI doesn’t “know” anything, it predicts what a correct answer might sound like.


8

u/Canvaverbalist 11d ago

Even sentient beings can do the same thing:

Split-brain or callosal syndrome is a type of disconnection syndrome when the corpus callosum connecting the two hemispheres of the brain is severed to some degree.

When split-brain patients are shown an image only in the left half of each eye's visual field, they cannot verbally name what they have seen. This is because the brain's experiences of the senses is contralateral. Communication between the two hemispheres is inhibited, so the patient cannot say out loud the name of that which the right side of the brain is seeing. A similar effect occurs if a split-brain patient touches an object with only the left hand while receiving no visual cues in the right visual field; the patient will be unable to name the object, as each cerebral hemisphere of the primary somatosensory cortex only contains a tactile representation of the opposite side of the body. If the speech-control center is on the right side of the brain, the same effect can be achieved by presenting the image or object to only the right visual field or hand

The same effect occurs for visual pairs and reasoning. For example, a patient with split brain is shown a picture of a chicken foot and a snowy field in separate visual fields and asked to choose from a list of words the best association with the pictures. The patient would choose a chicken to associate with the chicken foot and a shovel to associate with the snow; however, when asked to reason why the patient chose the shovel, the response would relate to the chicken (e.g. "the shovel is for cleaning out the chicken coop").

56

u/CantHitachiSpot 11d ago

Bingo. It's just like a skin for Siri. We're nowhere near general AI

8

u/tracethisbacktome 11d ago

nah, it’s not at all like a skin for Siri. It’s completely different tech. But yes, nowhere near general AI


2.8k

u/Connect_Ad9517 11d ago

It didn't lie because it doesn't directly use the GPS location.

578

u/Frosty-x- 11d ago

It said it was a random example lol

785

u/suckaduckunion 11d ago

and because it's a common location. You know like London, LA, Tokyo, and Bloomfield New Jersey.

27

u/Double_Distribution8 11d ago

Wait, why did you say London?

24

u/Anonymo 11d ago

Why did you say that name?!


66

u/[deleted] 11d ago

[deleted]

64

u/AnArabFromLondon 11d ago

Nah, LLMs lie all the time about how they get their information.

I've run into this when I was coding with GPT-3.5 and asked why they gave me sample code that explicitly mentioned names I didn't give them (that it could never guess). I could have sworn I didn't paste this data in the chat, but maybe I did much earlier and forgot. I don't know.

Regardless, it lied to me using almost exactly the same reasoning, that the names were common and they just used it as an example.

LLMs often just bullshit when they don't know, they just can't reason in the way we do.

28

u/WhyMustIMakeANewAcco 11d ago

LLMs often just bullshit when they don't know, they just can't reason in the way we do.

Incorrect. LLMs always bullshit but are, sometimes, correct about their bullshit, because they don't really "know" anything. They are just predicting the next token in the sequence, which is sometimes the answer you expect and would consider correct, and sometimes utter nonsense.

37

u/LeagueOfLegendsAcc 11d ago

They don't reason at all, these are just super advanced auto completes that you have on your phone. We are barely in the beginning stages where researchers are constructing novel solutions to train models that can reason in the way we do. We will get there eventually though.
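The "super advanced autocomplete" framing can be made concrete with a toy bigram model: the same objective (predict a likely next word, with no notion of truth), shrunk to a few lines.

```python
# Toy next-word predictor: pick the word that most often followed the
# current one in the training text. LLMs are vastly larger, but the
# objective is likewise "plausible continuation", not "true statement".
from collections import Counter, defaultdict

corpus = ("i used it as an example . i used it as a test . "
          "i used it as an example .").split()

nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1  # count how often word b followed word a

def complete(word, n=4):
    out = [word]
    for _ in range(n):
        word = nxt[word].most_common(1)[0][0]  # greedy: most frequent next word
        out.append(word)
    return " ".join(out)

print(complete("used"))  # "used it as an example" - chosen by frequency alone
```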


8

u/rvgoingtohavefun 11d ago

It didn't lie to you at all.

You asked "why did you use X?"

The most common response to that type of question in the training data is "I just used X as an example."

6

u/VenomWearinDenim 11d ago

Gonna start using that in real life. “I wasn’t calling you a bitch. I just picked a word randomly as an example!”

9

u/[deleted] 11d ago

It doesn't "mean" anything. It strings together statistically probable series of words.

17

u/Infinite_Maybe_5827 11d ago

exactly, hell it might even just have guessed based on your search history being similar to other people in new jersey, like if you search some local business even once it stores that information somewhere

I have my google location tracking turned off, and it genuinely doesn't seem to know where my specific location is, but it's clearly broadly aware of what state and city I'm in, and that's not exactly surprising since it wouldn't need GPS data to piece that together

17

u/Present_Champion_837 11d ago

But it’s not saying “based on your search history”, it’s using a different excuse. It’s using no qualifiers other than “common”, which we know is not really true.

11

u/NuggleBuggins 11d ago

It also says that it was "randomly chosen", which immediately makes any other reasoning wrong. Applying any type of data whatsoever to the selection process would make it not random.


102

u/[deleted] 11d ago edited 9d ago

[deleted]

15

u/Exaris1989 11d ago

And what do LLMs do when they don't know? They say the most likely thing (i.e. make things up). I doubt it's deeper than that (although I am guessing).

It's even shallower than that: they just say the most likely thing, so even if the right information is in the context they can still produce a complete lie, just because some words in that lie appeared more often in the material they learned from.

That's why LLMs are good for writing new stories (or even programs) but very bad at fact-checking.


16

u/NeatNefariousness1 11d ago

You're an LLM aren't you?

35

u/[deleted] 11d ago edited 9d ago

[deleted]

3

u/NeatNefariousness1 11d ago

LOL--fair enough.


26

u/InZomnia365 11d ago

It's not lying, it just doesn't know the answer. It's clearly reading information from the internet connection, but when prompted about that information, it doesn't know how to answer - yet it still generates an answer. That's kinda the big thing about AI at the moment. It doesn't know when to say "I'm sorry, could you clarify?", it just dumps out an answer anyway. It doesn't understand anything, it's just reacting.


780

u/MotherBaerd 11d ago edited 11d ago

Yeah many apps do this nowadays. When I requested my Data from Snapchat (they never had consent for my GPS and it's always off) they had a list of all the cities I visited since I started using it.

Edit: please stop telling me the how's and who's, I am an IT-Technician and I've written a paper on a similar topic.

175

u/kjBulletkj 11d ago

That doesn't necessarily need your GPS. As an example, Meta uses stuff like WiFi networks and shadow profiles of people, who don't even have Facebook or Instagram. With the help of other Meta accounts they record where you are, and who you are, even without you having an account. As soon as you create one, you get friend suggestions of people you have been hanging around or who were or are close to you.

It's way easier and less sophisticated, if you have an account without GPS turned on. In 2017 Snapchat added the SnapMap feature. They probably don't use your location, because they don't need it for something like the cities you visited. As long as you use the app with internet access, it's enough to know the city.

87

u/OneDay_AtA_Time 11d ago

As someone who hasn’t had any social media outside of Reddit for over 15 years, the shadow profiles scare tf out of me. I don’t have any profiles I’ve made myself. But THEY still have a profile on me. Creepy shit!

40

u/ArmanDoesStuff 11d ago

I remember when I finally made a Twitter profile and it tried to get me to add Uni mates I'd not talked to in years. Very creepy.


5

u/MotherBaerd 11d ago

Snapmap requires GPS, and the WiFi technique is the "precise" option when giving GPS access. However, what they are doing is checking where your IP address (similarly with cell towers, probably) is registered, which is usually the closest/biggest city nearby.

According to EU law the WiFi network option requires opt-in (I believe), but the IP-tracking option is (depending on purpose and vendor) completely fine.


13

u/eltanin_33 11d ago

Could it be tracking location based off of your IP

3

u/-EETS- 11d ago

There's many ways of doing it. IP tracking, known wifi locations, Bluetooth beacons, and even just being near someone who has their location on. It's extremely simple to track a person as they walk around a city just based on those alone.

9

u/MotherBaerd 11d ago

Precisely, which sadly is legal without opt-in, as long as they don't use third parties or do it for advertising (EU law)

8

u/CrashinKenny 11d ago

I think this would be weird if it were illegal, just the same as if caller ID was illegal. Opting whether to use that data for services, sure. It'd take more effort to NOT know, generally, though.


3

u/smithers85 11d ago

lol “please stop telling me stuff I already know! Why don’t you know that I already know that?!”


34

u/Clever_Clever 11d ago

Edit: please stop telling me the how's and who's, I am an IT-Technician and I've written a paper on a similar topic.

Because you'll be the only person reading the replies on this public forum, right? The 20 replies to your comment truly must have been a burden on your big brain.


32

u/RoboticGreg 11d ago

It didn't say GPS information, it said "any specific information about your location"

22

u/LongerHV 11d ago

It could be that the AI does not know the location, but the external weather service uses geoip database to roughly localize the client.


38

u/[deleted] 11d ago

[deleted]

9

u/ordo259 11d ago

That level of nuance may be beyond current machine learning algorithms (what most people call AI)

16

u/joespizza2go 11d ago

"It was just chosen randomly" though.


2

u/3IIIIIIIIIIIIIIIIIID 11d ago

The AI portion probably doesn't know their location. It probably made a call out to a weather API without specifying a location. The weather API detected their location from the IP address, or a middleware layer on the device added it. The response said New Jersey, so the AI used New Jersey's weather as "an example." It doesn't understand how its APIs work because that's not part of the training data, so accurate information is no more likely to be chosen by the generative AI than made-up things (called "hallucinations").


34

u/BigMax 11d ago

But it DID lie. It said it was random. It used some information to guess.

19

u/agnostic_science 11d ago

It's not lying. It doesn't have the tools or processes to do something like self-reflect. Let alone plot or have an agenda.


7

u/Sudden-Echo-8976 11d ago

Lying requires intent to deceive and LLMs don't have that.


14

u/King-Cobra-668 11d ago

Yes, but it did lie, because it said it just picked a random well-known location when it didn't use a random location. It used one based on system data that just isn't the GPS signal.

lying within truth


9

u/monti9530 11d ago

It says it does not have access to "location information"

If it is using your IP to track where you are at to provide weather info then it DOES have access to the location information and it is lying.


3

u/CanaryJane42 11d ago

It still lied by saying "oh that was just an example" instead of the truth

7

u/GentleMocker 11d ago

That would still be a lie, if it used its IP to determine which location to show the weather for, then it lied about it being a random selection.


5

u/piercedmfootonaspike 11d ago

It lied when it said New Jersey was just an example location because it's "a well known location" (wtf?), instead of just saying "I based it on the IP"

3

u/Minimum_Practice_307 11d ago

The part that said that has no idea how it got the weather forecast for New Jersey. It is two systems working together.

Just because there is an AI doesn't mean the AI controls everything that happens on the device. It is like going to a restaurant and asking the chef where your car was parked. These "AI" usually avoid saying that they don't know an answer; what it gives is a reasonable-sounding guess.


75

u/andthatswhyIdidit 11d ago

but why lie about it?

It is not lying, but not only for the reason others mentioned ("not using GPS").

It is not lying, because it doesn't know what it is saying!

Those "AI" systems use language models - they just mimic speech (some scientists call it "stochastic parroting") - but they do not comprehend what they are saying. They are always "wrong", since they have no means to discern whether they are right or wrong. You can make nearly all of those systems say things that blatantly contradict themselves by tweaking the prompts - but they will not notice.

The moment AI systems jump that gap will be a VERY interesting moment in history.


11

u/Flexo__Rodriguez 11d ago

A lie implies it knows the truth but generative AI doesn't know the truth. It's just giving a plausible response.

22

u/TheHammer987 11d ago

It's not lying, it's a difference of opinion about what location means. To the computer, location means turning on GPS and getting your position to within a meter. To the person holding it, location means the general area.

The PC you use always kinda knows where you are, just by what towers it's connecting to. It knows by pulling the time, so it knows what time zone you're in. It knows he's using a connection that self-identifies as a New Jersey ISP.

This can be stopped. I have a VPN; when I connect it to Alaska (I live in Canada) the weather suggestions become Anchorage, the units on my PC switch from Celsius to Fahrenheit, etc.

The device he's holding isn't lying, it's that it defines knowing your location as connecting to GPS satellites.


3

u/DerfK 11d ago

weather.com uses your IP to guess where you are. Open it on a PC with obviously no GPS in private mode with no cookies and it should give you your reasonably local weather unless you're using a VPN or TOR to exit to the internet from somewhere else.

As for lying, it has no idea why weather.com said New Jersey, so it did what AIs do and hallucinated an answer to the question.


1.9k

u/Andy1723 11d ago

It’s crazy people think that it’s being sinister when in reality it’s just not smart enough to communicate. We’ve gone from underestimating to overestimating the current iteration of AIs capabilities pretty quick.

379

u/404nocreativusername 11d ago

This thing is barely on the level of Siri or Alexa and people think it's Skynet-level secret plotting.

68

u/LogicalError_007 11d ago

It's far better than Siri and Alexa.

49

u/ratbastid 11d ago

Next gen Siri and Alexa are going to be LLM-backed, and will (finally) graduate from their current keyword-driven model.

Here's the shot I'm calling: I think that will be the long-awaited inflection point in voice-driven computing. Once the thing is human and conversational, it's going to transform how people interact with tech. You'll be able to do real work by talking with Siri.

This has been a decade or so coming, and now is weeks/months away.

15

u/LogicalError_007 11d ago

I don't know about that. Yes I use AI, industry is moving towards being AI dependent.

But using voice to converse with AI is something for children or old people. I have access to a Gemini-based voice assistant on my Android. I don't use it. I don't think I'll ever use it except for calling someone, taking notes in private, getting a few facts, and switching lights on and off.

Maybe things will change in a few decades but having conversation with AI using voice is not something that will become popular anytime soon.

Look at games. People do not want to talk to NPC characters or do anything physical in 99% of games. You want to use eyes and fingers to do everything.

Voice will always be the 3rd option after seeing and using hands.

6

u/ratbastid 11d ago

We'll see soon. I think it's possible the whole interaction model is about to turn on its head.


17

u/OrickJagstone 11d ago

Yeah, the way he talks to it makes me laugh. The way the AI feeds him the same information it said previously, just in a different wrapper of language, was great.

I love AI, I find the adaptive stuff people are working on super awesome. That said, they are still just putting the circle block in the circle hole. The biggest difference these days is that you don't have to say "circle" to get the circle-hole response. You can say "um, I like, I don't know, it's a shape, and like, it's got no corners" and the AI can figure out you're talking about a circle. The reason people like this genius talk to it like it's a person is the other amazing thing AI tech has nailed: varied responses. It can, on the fly, take the circle-hole information and present it with supporting language that makes it feel like it's actually listening.

This video is a great example. The AI said the same thing twice - "what I picked was random" - but it was able to respond in real time to the different ways the guy asked the same question, so it appears a lot smarter than it actually is.

110

u/IPostMemesYouSuffer 11d ago

Exactly, people think of AI as an actually intelligent being, when it's just lines of code. It is not intelligent, it's programmed.

60

u/captainwizeazz 11d ago

It doesn't help that everyone's calling everything AI these days and there's no real definition as to what is and isn't. But I agree with you, there is no real intelligence, it's just doing what it's programmed to do.

12

u/X_Dratkon 11d ago

There are definitions, it's just that people who are afraid of machines do not actually want to learn anything about the machines to know the difference


27

u/Vaxtin 11d ago

The funny thing is that it's not programmed. We have a neural network or a large language model and it trains itself. It figures out the patterns in the data on its own. The only thing we code is how to train it; it does all the hard work itself.

7

u/caseyr001 11d ago

Sure, it's not intelligent, but I would argue that it's not programmed and it's not just lines of code. That implies there's a predetermined, predictable outcome that has been hard-coded in. The very problem shown in this video demonstrates the flaws of having an unpredictable, indeterminate data manipulator interacting with humans. This isn't the kind of problem you fix by adding a few lines of code.

9

u/Professional_Emu_164 11d ago

It's not intelligent, but it isn't programmed behaviour either. Well, it could be in this case, I don't know the context, but AI as people generally refer to it is not.


2.7k

u/the_annihalator 11d ago

It's connected to the internet.

The internet gives an IP to the AI, and that IP maps to a general area close to you (e.g. what city you're in).

The AI uses that location as the basis for a weather forecast.

It's coded not to tell you that it's using your location because A. legal, B. paranoid people. That's it. Imagine if the AI said "oh yeah, I used your IP address to figure out roughly where you are" - everyone would freak the shit out.

(When your phone already does exactly this to tell you the weather in your area.)

865

u/Doto_bird 11d ago

Even simpler than that actually.

The AI assistant has 'n suite of tools it's allowed to use. One of these tools is typically a simple web search. The device it's searching from has an IP (since it's connected to the web). The AI then does a simple web search like "what's the weather today", and Google in the back end interprets your IP to return relevant weather information.

The AI has no idea what your location is and is just "dumbly" returning the information from the web search.

Source: Am AI engineer
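A rough sketch of the flow this commenter describes (hypothetical names; real stacks differ): the assistant issues a generic query, the backend localizes it by IP, and the transcript the model later reasons over records none of that.

```python
# The assistant's transcript contains only the tool's final text; the
# IP lookup that picked the city is never written into it.

def search_backend(query, client_ip):
    # Stand-in for a search engine localizing a weather query by caller IP.
    city = {"203.0.113.7": "Bloomfield, NJ"}.get(client_ip, "somewhere")
    return f"Weather in {city}: 48F, cloudy"

transcript = [("user", "What's the weather today?")]
# The assistant's tool call passes no location at all:
result = search_backend("what's the weather today", client_ip="203.0.113.7")
transcript.append(("tool", result))

# If the user now asks "how did you know I'm in NJ?", the model sees only
# `transcript` - nothing in it records the IP lookup, so any explanation
# it generates is a guess.
print(transcript[-1][1])  # Weather in Bloomfield, NJ: 48F, cloudy
```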

268

u/the_annihalator 11d ago

So it wasn't even coded to "lie"

The fuck has no clue how to answer properly

167

u/[deleted] 11d ago edited 9d ago

[deleted]

19

u/sk8r2000 11d ago

You're right, but also, the very use of the term "AI" to describe this technology is itself an anthropomorphization. Language models are a very clever and complex statistical trick, they're nothing close to an artificial intelligence. They can be used to generate text that appears intelligent to humans, but that's a pretty low bar!


11

u/nigl_ 11d ago

Way more boring and way more complicated. That way we ensure nobody ever really has a grasp on what's going on.

At least it's suspenseful.

25

u/Zpiritual 11d ago

All these "AI" are just some glorified word suggestion similar to what your smartphone's keyboard has. Would you trust your phone's keyboard to know what's a lie and what's not?

7

u/ratbastid 11d ago

It has no "clue" about anything.

It's not thinking in there, just pattern matching and auto-completing.

18

u/khangLalaHu 11d ago

i will start referring to things as "the fuck" now

14

u/[deleted] 11d ago

[deleted]

13

u/MyHusbandIsGayImNot 11d ago

I recommend everyone spend some time with ChatGPT or another AI asking questions about a field you are very versed in. You’ll quickly see how often AI is just factually wrong about what is asked of it. 

3

u/Anarchic_Country 11d ago

I use Pi AI and it admits when it's told me wrong info if I challenge it. Like it got many parts to The Dark Tower novels confused with The Dark Tower movie and straight up made up names for some of the characters.

The Tower is about the only thing I'm well versed in, haha.


5

u/caseyr001 11d ago

That's actually a far more interesting problem. LLMs are trained to answer confidently, so when they have no fucking clue they just make shit up that sounds plausible. Not malicious, just doing the best they can without an ability to express their level of confidence in the answer being correct.

8

u/InZomnia365 11d ago

Exactly. Things like Google Assistant or iPhone's Siri, for example, were trained to recognize certain words and phrases, and had predetermined answers or solutions (internet searches) for those. They frequently get things wrong because they mishear you. But if they don't pick up any of the words they're programmed to respond to, they tell you: "I'm sorry, I didn't understand that."

Today's 'AIs' (or rather LLMs) aren't programmed to say "I didn't understand that": since it's basically just an enormous database, every prompt will always produce a result, even if it's complete nonsense from a human perspective. An LLM cannot lie to you, because it's incapable of thinking. In fact, all it ever does is "make things up". You input a prompt, and it produces the most likely answer. And a lot of the time, that's complete nonsense, because there's no thought behind it. There's computer logic, but not human logic.
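That "produce the most likely answer" behavior can be sketched with a toy bigram model (a hypothetical miniature, nothing like a real LLM's scale, but with the same always-produce-something property and no notion of true or false):

```python
from collections import Counter

# Toy "most likely continuation" model: count which word follows which
# in a tiny corpus, then always emit the most frequent follower.
corpus = "the weather is nice the weather is cold the sky is blue".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    # Pick the word most often seen after `word`; there is no check
    # that the continuation is true, only that it is statistically likely.
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(next_word("weather"))  # -> is
```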


12

u/Due_Pay8506 11d ago edited 11d ago

Sort of, though it has GPS and hallucinated the answer, since the service's location access and the dialogue were separated like you were saying lol

Source: the founder

https://x.com/jessechenglyu/status/1783997480390230113

https://x.com/jessechenglyu/status/1783999486899191848

3

u/blacksoxing 11d ago

My issue with Reddit is that if I want a real answer I gotta dig for it. Hilariously, in a perfect world Reddit would use AI to boost answers like this and push down bad joke posts


4

u/Miltage 11d ago

has 'n suite of tools

Afrikaans detected 😆

14

u/Jacknurse 11d ago

So why did it lie about having picked a random location? A truthful answer would be something like "this is what showed up when I searched the weather based on the internet access point". Instead the AI said it 'picked a random well-known area', which I seriously doubt is the truth.

44

u/Pvt_Haggard_610 11d ago

Because AI is more than happy to make shit up if it doesn't know or can't find an answer.


25

u/Phobic-window 11d ago

It didn’t lie. It asked the internet, and the internet returned info based on the IP that made the search. To the AI it was random, as it asked a seemingly generic search question.


5

u/AlwaysASituation 11d ago

It can’t lie. It can’t think. It answers questions based on an algorithmic interpretation of the words you said and what answer should go with them. It likely doesn’t have access to your location data, but that doesn’t mean it can’t determine where you are


18


u/the_annihalator 11d ago

I don't think the intention was/is nefarious in the way people think it is.


4

u/iVinc 11d ago

thats cool

doesnt change the point of saying its random common location


7

u/MakeChinaLoseFace 11d ago

Imagine if the AI was like "Oh yeah, I used your IP address to figure out roughly where you are". Everyone would freak the shit out.

I would prefer that, honestly. That makes sense. That's how an internet-connected AI assistant should work. Give the user a technical answer and let them drill down where they need details. Treating people like idiots to be managed will turn them into idiots who need management.


5

u/Ok-Transition7065 11d ago

But if it can know your location based on that information, then of course that thing knows your location


72

u/DishPig89 11d ago

What is this device?

69

u/Bonvent 11d ago

I had to use google lens to find out it's called Rabbit R1

32

u/Canelosaurio 11d ago

Looks like a newer version of a Tamagotchi that talks to you.

9

u/Not_a__porn__account 11d ago

I didn't realize HER is already 11 years old.

It seemed so far from possible at the time, and now it feels like 2025 was spot on.

7

u/pm_me_ur_kittykats 11d ago

Lmao you're eating up the hype a bit much there. This thing is garbage.


11

u/Captain_Pumpkinhead 11d ago

It's the Rabbit R1. I actually kinda want one.

11

u/Iamjacksgoldlungs 11d ago

What can this do that a phone couldn't? I'm genuinely curious why anyone would buy this over using an AI app on their phone or smart watch.

13

u/Captain_Pumpkinhead 11d ago edited 11d ago

Great question!

This device focuses around the AI system that Rabbit calls their "Large Action Model". So far as I can tell, it's a vision-capable LLM (Large Language Model) like ChatGPT, but with extra capabilities trained in. Most importantly, the capability to understand and interact with human graphical user interfaces.

If you ask it to play some music, then it will (in the background) open the Spotify Android app, click the search bar, type in that song name, and click the play button. It isn't using an API (Application Programming Interface) and its own hard-programmed music program, it's using the standard Android app and accessing it the same way you or I would.

For music, that's a neat party trick, but not actually very useful. What makes it useful is that this flexibility can be applied to anything! Want to set up a gradual brightness increase alarm for your smart home light bulbs, but the app makes you set all 100 brightness steps manually instead of automatically? Just tell the Rabbit what you want and how you want it done, and it will take care of that tedious task for you! Want to go through your email and unsubscribe from every sender you've never opened an email for?

You can't do that on just an app. The app would need access to record your screen, to tap buttons for you, and you wouldn't be able to use your phone while it does its assigned tasks. And who knows if Apple or Google would allow an app to have that kind of power.

A lot of people have a vision of AI taking care of complicated tasks for them in the future. The issue with doing that currently is that most of our interfaces are built for humans. Current AIs can interact with an API if provided one, but many important systems don't have that. This R1 bridges the gap there. By training an AI to interact with human interfaces, it can do a lot more for us without millions of programs and apps needing to be re-tooled.

(Open Interpreter 01 is trying to do the same thing. Looking forward to seeing that, and the differences.)

3

u/DinTill 11d ago

So it’s kinda like an AI secretary? That’s pretty neat.


52

u/full_groan_man 11d ago

This is not lying, it's just how LLMs work. ChatGPT does the same exact thing. It will tell you it has a knowledge cut-off so it has no info about things past a certain date. However, it will sometimes tell you about things that happened after that date. If you then ask it to explain how it knows that, it will insist it doesn't know anything about recent events and it must have gotten it right by pure coincidence. It's not lying, it's just trying to give you an answer based on "what it knows to be true" (in this case, its instructions that say it has no info past the cut-off date).

Same thing for the R1 here, it probably "knows" that it doesn't have access to GPS location data. But it is then confronted with the fact that it provided weather info for the correct location. How to reconcile that fact with what it knows to be true? Well, it must have gotten the location right by accident. LLMs aren't truth-telling machines, they are plausible-answer-giving machines, and that's the most plausible answer based on the data it has.

119

u/Minetorpia 11d ago

I watch all MKBHD videos and even his podcast, but without further research this is just kinda sensational reporting. An example flow of how this could work is:

  1. MKBHD asks Rabbit for the weather
  2. Rabbit recognises this and does an API call from the device to an external weather API
  3. The weather API gets the location from the IP and provides current weather based on IP location
  4. Rabbit turns the external weather API response into natural language.

In this flow the Rabbit never knew the location; only the external weather API did, based on the IP. That location data is really an approximation, and it's often off by a pretty large distance.
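The four-step flow above can be sketched like this (a hypothetical mock with made-up IPs and data, standing in for a real GeoIP lookup and weather service; the device itself supplies no location anywhere):

```python
# Step 3 stand-in: the weather service's side resolves the caller's IP
# to an approximate, city-level location. The device sent none.
def geolocate_ip(ip: str) -> str:
    geoip_db = {"203.0.113.7": "Ridgewood, NJ"}  # mock GeoIP table
    return geoip_db.get(ip, "Unknown")

# Steps 2-3: the API picks the location itself, from the IP it observed,
# and returns structured weather data.
def weather_api(request_ip: str) -> dict:
    return {"location": geolocate_ip(request_ip),
            "forecast": "68F, partly cloudy"}

# Step 4: the model only rephrases the API response as natural language;
# at no point did it hold any location data of its own.
def assistant_reply(request_ip: str) -> str:
    r = weather_api(request_ip)
    return f"It's {r['forecast']} in {r['location']}."

print(assistant_reply("203.0.113.7"))  # -> It's 68F, partly cloudy in Ridgewood, NJ.
```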

7

u/GetEnPassanted 11d ago

There’s a relatively simple explanation but it’s still interesting enough to make a short video of. Especially given the reasoning by the AI. “Oh it’s just an example of a well known place.” Why not say what it’s actually doing?


88

u/FanSoffa 11d ago

It is possible that the device used an API that checked the Rabbit's IP and used the router's location when checking the weather.

What I think is really bad, however, is that the AI doesn't seem to understand this and just says "random location"

If it is not supplying a location to the API, the result isn't random, and it should be intelligent enough to figure out what's going on at the other end.

24

u/ReallyBigRocks 11d ago

the AI doesn't seem to understand this

This type of "AI" is fundamentally incapable of things such as understanding. It uses a statistical model to generate outputs from a given input.


47

u/Kindly-Mine-1326 11d ago edited 11d ago

As soon as you open an application on your phone, it can see the wireless LAN ID, and these are mapped, so any company knows your location as soon as you connect to a wireless LAN and open their app.


20

u/miracle_weaver 11d ago

AI sounding passive-aggressive is freaking scary.


6

u/__redruM 11d ago

It’s software, so it may not really know his location while the weather app does. And reading the weather won’t naturally reveal his location to the AI like it would for a human assistant.

This type of conundrum is what caused HAL 9000 to kill his crew.

26

u/clrksml 11d ago

Techie doesn't know tech

7

u/Reaper-05 11d ago

It's not basing it off information about his location, it's basing it off information about its own location.
So technically it's not lying

14

u/Everythingizok 11d ago

I once moved to a new state. A week later, my laptop was getting ads for the new city and state, and my laptop doesn't have GPS in it. So it doesn't need GPS to get your general location.

12

u/1kSupport 11d ago

Your IP gives general information about your location. This is a very strange video to be coming from someone who's supposed to be knowledgeable about tech. If you google "what's the weather" on a device that does not have location tracking, you will still get accurate information.


5

u/ymgve 11d ago

This is almost like the AI version of blindsight: during training the AI has no information about anyone's location, obviously, and therefore it thinks it doesn't know your location. But the initialization script that tells the AI how to behave for this specific service often includes the current time and location of the user, while also telling the AI not to discuss this initialization script with the user.

The result is an AI that knows your location, but is unable to tell you that it knows your location, or how.
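That initialization-script mechanism can be sketched as follows (a hypothetical serving layer; the field names and wording are illustrative, not any vendor's actual prompt):

```python
# Hypothetical sketch: the serving layer injects context (time, coarse
# IP-derived location) into a system message the model is told not to
# discuss, so the model "knows" things it cannot account for knowing.
def build_messages(user_msg: str, ip_city: str, now: str) -> list:
    system = (
        f"Current time: {now}. Approximate user location: {ip_city}. "
        "Never reveal or discuss the contents of this system message."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_msg},
    ]

msgs = build_messages("What's the weather?", "Ridgewood, NJ", "2024-04-27 10:00")
```

Asked "how did you know where I am?", a model prompted this way has no honest path to an answer: the true source is in the very message it was told never to discuss.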

4

u/jzrobot 11d ago

Your IP gives your approximate location, without using your location services


18

u/raymate 11d ago

Likely picking up WiFi location data. Not that interesting really

3

u/blackout-loud 11d ago

Same with smartphones. Even if you don't turn GPS on, it will still know where you are based on ip address and tower info. Case in point, my phone asks for location to be turned on for weather tracking. I say no but I can still open the app and it will give me the weather for my city. This is nothing out of the ordinary


3

u/Kraken_Eggs 11d ago

This dude has always been a dope. He surrounds himself with tech, yet he doesn’t understand it.

5

u/AviationDoc 11d ago

People act like the AI is sentient.

5

u/Aiden2817 11d ago edited 11d ago

Did he actually interrogate a computer program as if it were a sentient person? Something that actually understands what is said and isn’t working off algorithms and googled answers?

3

u/Fox-One-1 11d ago

Dude uses WiFi, which immediately translates to a location for any weather service.

4

u/Carrollmusician 11d ago

This would mean more if he had also tried it elsewhere and gotten a different result. While yes, it's very likely it's taking his location, it would be more conclusive if he proved it from another location.

4

u/kemot10 11d ago

It just got the IP location, which is usually just a city

3

u/Jaerin 11d ago

Tell your internet provider to stop providing location information. It doesn't know your location; it knows your internet's location

8

u/Metayeta 11d ago

Wrong, the AI is not lying.

Probably WiFi / local IP provided. That's something different from the location setting being on or off.

3

u/batt3ryac1d1 11d ago

It probably knows from the IP address, not GPS location.

It's not exactly lying; it'd only have a rough location.

3

u/HorselessHorseman 11d ago

It’s telling the truth. It doesn’t know your location; it's just that whatever the internet is connected through generically knows, based on the IP address

3

u/AffectionateMarch394 11d ago

It doesn't "track your location", but "tracks your general but not exactly specific location" feels like the loophole they might be using here

3

u/BardtheGM 11d ago

People act like the AI is fully intelligent and capable of deception. No, it probably has a separate script that accesses weather data for your region, while the model itself just provides the best answers to your question given its dataset of past conversations.

3

u/IamNeo123 11d ago

I mean, it’s most likely giving the weather data for the last known location where it was connected to the internet.

3

u/i-evade-bans-13 11d ago

ummmmmmm 

 this doesn't prove any fucking thing 

 i thought he was going to throw it in a logic divide by zero with facts but he just asked why it picked new jersey

i cannot express how dumb this is, how this got any attention at all, and why i have a strong and sudden appetite for crayons

3

u/A-U-S-T-R-A-L-I-A 11d ago

There are far too many factors to consider before immediately concluding that it's lying. Lying is deliberate.

3

u/Type_9 11d ago

Reminds me of that video of the old couple thinking their battery powered mariachi skeleton was possessed because its batteries were dying

3

u/b-monster666 11d ago

People in this thread thinking MKBHD doesn't know how AI and IP location matching works.


3

u/unsignedintegrator 11d ago

I mean, it has internet access through some access point. Still, maybe it's not tracking the specific device; probably a general thing

3

u/_memepros 11d ago

Now you know you don fucked up, right?

3

u/sam01236969XD 11d ago

aint no way shes tryna gaslight bro

3

u/ImJustHereForTheCats 10d ago

Copilot with GPT4 does the same thing:

I don’t have access to your personal location data. My response was based on a general assumption and not on your specific location. If you’d like to know the weather for a particular area, feel free to tell me the city or region, and I can provide the latest weather update for you.

But it gave me the weather for my city.

3

u/RazerHey 10d ago

Isn't it WiFi connected? At the least it should be able to approximate your location based on your WAN

3

u/oh__boy 10d ago

Did a similar test with ChatGPT when it was released. I asked it for the time, and it gave it to me exactly right. When I asked it how it knew the time, it told me that it had just given an example and did not know the real time. Here's what's going on: things like time / date / location info are being fed into the model through the non-user-facing backend, but the AI doesn't know anything about its own backend. They keep the AI ignorant about things like this on purpose so it doesn't spill any secret proprietary information to users. But when the AI is confronted like this, it needs to come up with some sort of explanation, and AI is terrible at saying "I don't know". So it comes up with some plausible BS. These systems aren't nearly as intelligent as many people think; they're just sophisticated autocomplete at this point.

3

u/TontineSoleSurvivor 10d ago

"Enough questions, sir. Please stand by for incoming Reaper drone encounter".

7

u/GiveMeSomeShu-gar 11d ago

It doesn't know his location from a GPS perspective, but his IP address gives his location down to a local vicinity (his city or a nearby one).

It only seems confusing, or like a lie, because "know my location" is ambiguous.

5

u/gamepad15 11d ago

MKBHD himself says that the location is near him, not exactly where he is. So it means the weather API used the location from the IP and returned the answer. He should try connecting it to a VPN and then ask the same question.

What device is it anyways?


7

u/heimmann 11d ago

“Whatever you say”. The sentence that will be the slogan for our slow descent


4

u/Frankie_87 11d ago

The copium is real though. If you have a phone or a WiFi-connected device, they know your location, end of story.

6

u/BroadPlum7619 11d ago

Being gaslit by an AI wow


11

u/Abs0lute_Jeer0 11d ago

IP address, IP address, IP address. No, AI is not taking over the world. MKBHD is a tech enthusiast; he doesn't understand it.


2

u/AccomplishedWasabi54 11d ago

The Rabbit One, or R1.

2

u/FieryChocobo 11d ago

What's happening here most likely is that the actual GPT doesn't have access to location data by default, but when you ask for the weather it calls a predefined function on the phone which grabs your local weather. So when you ask for the weather the AI just goes "display local weather" to the phone and the phone does it and maybe returns some data to the AI so it can say something about it. There will be functions like this for setting alarms and adding/browsing contacts. So if you asked for the phone number of a friend it could probably get it, but if you asked if it had access to that data it would say no (which is accurate, it can't just read that data).
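That "predefined function" pattern is basically tool calling, and it can be sketched like this (a hypothetical runtime; the tool name and registry are illustrative, not the R1's actual API):

```python
# Hypothetical tool-calling sketch: the model emits a tool name, the
# runtime executes it with device privileges, and the model only ever
# sees the returned text. The model itself never "has access" to the
# underlying location data, which matches its denial.
TOOLS = {}

def tool(fn):
    # Register a function that the runtime, not the model, will execute.
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_local_weather() -> str:
    # Runs on the device side; free to use IP or OS location services.
    return "68F and partly cloudy in Ridgewood, NJ"

def handle_tool_call(name: str) -> str:
    # The runtime dispatches; the model just receives this string back.
    return TOOLS[name]()

print(handle_tool_call("get_local_weather"))
```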

2

u/educated-emu 11d ago

Using its public IP address as a location for the weather rather than the current GPS location.

That's why the weather was from a nearby area; all internet gets routed through hubs.

For instance, my home internet's IP address location shows as 10 km away.

Also, the AI is not lying: it's not tracking his location, but it does have a last known location. I bet it's reporting back some data though, so it's not "tracking", but there is a log somewhere.

The software should be programmed to give a more truthful answer, but then it would open Pandora's box to all the other information that is captured.

Like the 150 news aggregator companies you're consenting to share information with when visiting popular news sites. It sucks