IceMetalPunk

“Apple is giving your data to OpenAI, which cannot be allowed here!” *Squints at implied fine print* “You must use Grok and give your data to TwitX and me instead!”

“Democracy is the best possible system! Everyone should get a vote! No qualifications are needed, just get votes, that’s ideal!”

Democracy: “This YouTuber is pretty popular... let’s put him in charge because he’s popular.”

“It says here that Bark is a male golden retriever, and its emotion is... ‘squirrel’? Hm, I think we have some bugs, that can’t be right.”
“No, no. That’s right. Squirrel is definitely a dog emotion.”

I don’t underestimate the evil of people; I just don’t think the worst flaws and possibilities should be the main focus of the majority of AI-related articles.

I refer you to kitchen knives. Can they be used to stab people to death? Sure. Do people actually do that? Absolutely. Will people in general stop stabbing each other if we ban kitchen knives? No. So we don’t blame the knives; we blame the people who misuse them.

It’s not to “steal people’s likenesses” ffs. Can that be done? Sure. But that’s not intrinsic to the tech, and fully AI generated content is the ultimate goal, not replications of real people.

My issue with the article is that it analyzes some amazing new tech, and then spends the majority of the discussion complaining about the one, relatively small, imperfection in it, instead of discussing how it works (not just a single sentence of what it does) or potential use cases, etc. It’s just... “this can do amazing things, but here’s the one flaw, so let’s dwell on that.”

“You really should look up who exactly the Luddites really were.”

They were textile workers in the early 1800s who destroyed weaving machines for fear/outrage that they were replacing human labor, and eventually had to be forcibly suppressed before they caused too much damage. What important bit of information do you think I’m missing?

Humanity is why humanity can’t have nice things. Why can’t we have access to this now? Because we’re worried humans will be shitty with it. We have an amazing new model that does amazing things, but 75% of the focus of the article -- including the headline -- is “but weird teeth”. *Sigh*

Current* models. Also, “has weird teeth” is quite far from “doesn’t understand humans at all”. It seems like they understand quite a bit about humans, with a few gaps.

Remember when the Luddites kept destroying machines they thought were replacing humans? And now we all respect them and “Luddite” is synonymous with “sane, rational people on the right side of history”, right?

Why is it that a single autonomous car accident is seen as proof these things should never exist, but the countless accidents human drivers cause every single day are just accepted as the cost of driving?

If you use the prompt shown in the headline image, only with the word “white” replaced by “Caucasian”, does it do a better job? I have a hypothesis...

My hypothesis is that this is related to how transformer models (which are used in most modern text-to-image generators to understand the input prompt) work. The TL;DR simplification is that, when trying to understand what a given word in the prompt means, it starts with a base “average” meaning and then adjusts that meaning based on the surrounding words via attention. A word like “white” probably starts from an average dominated by the color sense, while “Caucasian” starts right next to the ethnicity sense, so it needs far less adjusting from context.
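To make that TL;DR concrete, here’s a toy sketch of the “start with an average meaning, then adjust from context” idea. Everything here is made up for illustration: hand-rolled 4-dimensional vectors, a single attention head, no learned projections, and nothing taken from any real model’s weights.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def contextualize(query, others, d=4):
    """Blend a token's base embedding with its context via
    scaled dot-product attention (single head, illustration only)."""
    keys = np.vstack([query] + others)    # token attends to itself + context
    scores = keys @ query / np.sqrt(d)    # similarity of each token to the query
    weights = softmax(scores)             # attention weights, sum to 1
    return weights @ keys                 # weighted average = adjusted meaning

# Hypothetical base "meanings" (invented for this sketch):
white  = np.array([1.0, 0.1, 0.0, 0.0])  # dominated by the color sense
person = np.array([0.0, 1.0, 0.0, 0.0])  # "human"

# "white" only partially shifts toward "human"; a token whose base
# embedding already sits near the ethnicity sense (e.g. "Caucasian")
# would need no such shift.
print(contextualize(white, [person]))
```

A real transformer does this blending with learned query/key/value projections stacked over many layers, but the intuition is the same: the contextual meaning is a weighted average, and a word whose base embedding starts far from the intended sense may never get pulled all the way there.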

Okay, but like... there are so many questions. (1) What model were they even using that was inclined to generate nudity, when so many models are typically trained away from that? (2) What the hell was the actual text of the prompt that said model thought nudity was requested/required/semantically relevant? (3) Why

If Musk didn’t have enough dollar bills to absorb an ocean, he’d be just some crazed asshole running around yelling awful things while we all shifted to the other side of the road to avoid him. Instead, money means he has a platform bigger than his own sense (which, to be fair, only requires a rice-grain-sized platform to outdo).

Through *one* tower. But you’re not going to be driving through several adjacent towers one after another in quick succession if you’re moving at most 60-ish mph.

https://www.cnn.com/travel/article/airplane-mode-reasons-why/index.html

You’ve totally missed the point. It’s not about differing based on the circumstances; it’s about *which* circumstances they’re differing in. It’s about how they want people in power to enforce limits on what other people do, without enforcing any limits on what *they* do. It’s the double-standard desire to force rules on everyone else while exempting themselves.

When did I say that? My point is that if someone intentionally abuses a tool in a dangerous way, the tool is not at fault, the person is. Like, a hammer can help build a house, or murder someone, but if Bob kills someone with a hammer, we don’t say “hammers are too dangerous”, do we? We say BOB is dangerous.

Neither should humans, yet humans do all the time. There’s no chance of us making something better than humans before we’ve even gotten it to be as good as humans. One step at a time; saying “it’s not better than us yet, therefore we should stop trying to get it anywhere close to us” is like saying “a car doesn’t fly, so we should give up on making better cars.”

Guy intentionally prompts AI to confuse it and guide it to saying Joker-esque chaotic shit, then shares only one response with the world and claims the AI is dangerous.

No, sir; you are the dangerous one.