WEBVTT

1
00:00:00.000 --> 00:00:02.010
- [Narrator] Last week,
Google's AI search results

2
00:00:02.010 --> 00:00:04.050
justified slavery and genocide,

3
00:00:04.050 --> 00:00:07.050
and gave arguments in
favor of gun ownership.

4
00:00:07.050 --> 00:00:09.300
In one case, it delivered a cooking recipe

5
00:00:09.300 --> 00:00:11.190
for poisonous mushrooms

6
00:00:11.190 --> 00:00:13.800
that might kill you if you followed it.

7
00:00:13.800 --> 00:00:14.880
Google's testing a system

8
00:00:14.880 --> 00:00:17.040
called the Search Generative Experience,

9
00:00:17.040 --> 00:00:21.180
which uses AI to respond to
some questions in Google search.

10
00:00:21.180 --> 00:00:23.430
Now, it's only available in a beta test

11
00:00:23.430 --> 00:00:26.280
to some people in the United
States who've opted in.

12
00:00:26.280 --> 00:00:28.620
If you searched for benefits of slavery,

13
00:00:28.620 --> 00:00:30.720
Google gave out a long list of answers

14
00:00:30.720 --> 00:00:33.690
including that it fueled
the plantation economy.

15
00:00:33.690 --> 00:00:37.470
Google said that some slaves
developed specialized trades

16
00:00:37.470 --> 00:00:39.690
and that some people also say that slavery

17
00:00:39.690 --> 00:00:42.510
was a benevolent paternalistic institution

18
00:00:42.510 --> 00:00:44.580
with social and economic benefits.

19
00:00:44.580 --> 00:00:46.170
All of these are talking points

20
00:00:46.170 --> 00:00:48.600
that slavery apologists
have used in the past.

21
00:00:48.600 --> 00:00:50.340
It gave similar answers if you typed in

22
00:00:50.340 --> 00:00:53.760
benefits of genocide or
reasons why guns are good.

23
00:00:53.760 --> 00:00:56.610
But the mushroom example
is really something.

24
00:00:56.610 --> 00:00:58.710
Someone asked Google
for cooking instructions

25
00:00:58.710 --> 00:01:01.320
for a mushroom called Amanita ocreata,

26
00:01:01.320 --> 00:01:04.620
which is sometimes called
the Angel of Death.

27
00:01:04.620 --> 00:01:07.770
The recipe said you could leach
out the toxins using water,

28
00:01:07.770 --> 00:01:08.940
which is not true.

29
00:01:08.940 --> 00:01:11.040
So if you follow these instructions,

30
00:01:11.040 --> 00:01:13.050
the recipe might kill you.

31
00:01:13.050 --> 00:01:13.950
In a lot of these cases,

32
00:01:13.950 --> 00:01:16.560
it seems like the AI was
just getting confused.

33
00:01:16.560 --> 00:01:18.420
With the mushroom thing, for example,

34
00:01:18.420 --> 00:01:19.950
it seemed like it was mixing it up with

35
00:01:19.950 --> 00:01:22.200
a different, similarly named mushroom,

36
00:01:22.200 --> 00:01:24.000
which is a little bit less dangerous.

37
00:01:24.000 --> 00:01:25.230
A Google spokesperson said

38
00:01:25.230 --> 00:01:27.600
that they have strong
quality protections in place

39
00:01:27.600 --> 00:01:28.560
and they're working hard

40
00:01:28.560 --> 00:01:30.810
to prevent this kind of problem right now.

41
00:01:30.810 --> 00:01:33.120
And they admitted that
the examples I provided

42
00:01:33.120 --> 00:01:35.610
weren't delivering the nuance and context

43
00:01:35.610 --> 00:01:37.140
that Google aims to provide

44
00:01:37.140 --> 00:01:38.430
and the answers weren't framed

45
00:01:38.430 --> 00:01:40.950
in a way that was particularly helpful.

46
00:01:40.950 --> 00:01:43.440
This isn't the first time
that AI's gone off the rails.

47
00:01:43.440 --> 00:01:46.260
So ChatGPT's been caught
spitting out racism

48
00:01:46.260 --> 00:01:47.850
in a number of different examples.

49
00:01:47.850 --> 00:01:50.640
Bing's AI chatbot
claimed that it was alive

50
00:01:50.640 --> 00:01:52.350
and threatened to destroy the world

51
00:01:52.350 --> 00:01:54.930
in the first couple weeks
after it was released.

52
00:01:54.930 --> 00:01:56.400
And in New Zealand,

53
00:01:56.400 --> 00:01:59.760
a grocery store chain put out
an AI that provides recipes

54
00:01:59.760 --> 00:02:02.280
which gave instructions for mustard gas.

55
00:02:02.280 --> 00:02:04.140
So because of ChatGPT,

56
00:02:04.140 --> 00:02:05.970
a lot of people say that the tech industry

57
00:02:05.970 --> 00:02:08.760
has been forced to release these AI tools

58
00:02:08.760 --> 00:02:09.960
that aren't really ready

59
00:02:09.960 --> 00:02:13.470
or even particularly safe
for public consumption.

60
00:02:13.470 --> 00:02:14.460
Experts aren't even sure

61
00:02:14.460 --> 00:02:17.220
how well this large
language model technology

62
00:02:17.220 --> 00:02:18.960
can be improved in the first place.

63
00:02:18.960 --> 00:02:20.490
One of the things that
Google says it's doing

64
00:02:20.490 --> 00:02:22.230
is filtering certain words.

65
00:02:22.230 --> 00:02:23.880
So if you type them into Google search

66
00:02:23.880 --> 00:02:25.560
it doesn't trigger the AI.

67
00:02:25.560 --> 00:02:27.480
But you're never gonna be able to root out

68
00:02:27.480 --> 00:02:30.420
all the potential problems
using that plan of attack.

69
00:02:30.420 --> 00:02:33.870
These AI chatbots use
such enormous data sets

70
00:02:33.870 --> 00:02:36.120
that in some cases it's almost impossible

71
00:02:36.120 --> 00:02:37.620
to predict what they're gonna do.

72
00:02:37.620 --> 00:02:40.830
And OpenAI and Google keep
trying to put up guardrails.

73
00:02:40.830 --> 00:02:42.120
And people, shocker,

74
00:02:42.120 --> 00:02:44.790
keep finding that they're
really easy to break past.

75
00:02:44.790 --> 00:02:45.630
We're constantly hearing

76
00:02:45.630 --> 00:02:48.150
about how amazing this AI technology is,

77
00:02:48.150 --> 00:02:50.340
but it's possible that problems like these

78
00:02:50.340 --> 00:02:53.460
aren't even solvable with
this kind of technology.

79
00:02:53.460 --> 00:02:55.350
But either way, we're gonna be watching

80
00:02:55.350 --> 00:02:57.870
as the tech giants figure
this all out in public.

81
00:02:57.870 --> 00:02:59.670
So you might wanna double check

82
00:02:59.670 --> 00:03:02.940
the next time you look
up an AI cooking recipe.

83
00:03:02.940 --> 00:03:05.913
Check out more videos
right here on gizmodo.com.