I actually disagree that the fox is wearing lipstick; in my opinion it's just got an unnaturally red tongue. But no doubt a small adjustment could solve that.
There are bigger issues with the fox image! The astronaut got a fox hand :) Also it is somewhat unclear what the reflection in the visor is.
Good point! I confess I didn't inspect any of the images too closely; I just checked whether the prompt was fulfilled or not.
Forehead-smack. Ok, I edited in ChatGPT's second try on the astronaut+fox image. Better?
I guess it squeezed human-like lips, presumably the ones meant to be wearing the lipstick, in between the fox's own lips? Maybe? I still view it as the tongue, though. And I'm not sure about the reflection, but the hands are definitely an improvement. So overall: definitely *better*, but does it really satisfy the prompt now? IDK.
Then again, the polka dotted squirrel wasn't 100% right either, so it's probably fine.
(I also feel like "fox wearing lipstick" is an absurd request in the same way that "whale with long nails" or "mushroom with fake eyelashes" would be, especially if it's photorealistic rather than obviously drawn. It just doesn't make the same kind of "sense" that "cat with top hat" or "polka dotted squirrel" do.)
Definitely no fox 😆
Thought this might interest you. Tried taking up ChatGPT on one of its parting offers: “Would you like me to make a map of major locations from War and Peace?”
The output was a scatter plot with longitude and latitude as the x and y axes, respectively.
That's pretty funny. Did you ask it to turn that into actual dots on a normal-person map?
PS: That was my human-brain reply but I also asked o3 for its reply. This was the first candidate it offered up:
"Love it! Does the scatter actually look like Europe when you squint, or is it more abstract art? If you still have the screenshot / data, would be cool to see."
So it seems to appreciate the absurdity at least.
I mean, yes, it vaguely makes sense, since latitude is basically a y axis! And it does have the locations mapped out roughly in the right relationship to one another.
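For anyone who wants to recreate the effect, here's a minimal matplotlib sketch of that kind of chart. The location list and coordinates are my own rough picks from the novel, not ChatGPT's actual output:

```python
# A bare scatter of War and Peace locations, longitude on x and latitude
# on y -- no basemap at all. Coordinates are approximate.
import matplotlib.pyplot as plt

locations = {
    "Moscow": (37.62, 55.76),
    "St. Petersburg": (30.34, 59.93),
    "Austerlitz": (16.88, 49.15),
    "Borodino": (35.82, 55.52),
    "Smolensk": (32.05, 54.78),
}

fig, ax = plt.subplots()
for name, (lon, lat) in locations.items():
    ax.scatter(lon, lat)
    ax.annotate(name, (lon, lat))

# Because lat/long already behave like y/x, the relative positions come
# out roughly "map-shaped" even without drawing any geography.
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("War and Peace locations (bare scatter, no basemap)")
plt.show()
```

Even with nothing but axes, the points land roughly where your Europe-brain expects them, which is presumably why it "vaguely makes sense."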
Does the prediction include all nationalities? Then p(doom) is zero 😆
I think you're pointing out that killing literally all humans on the planet sounds like a stretch. I hesitate to argue against that seemingly reasonable point, since the counterargument sounds so sci-fi. But if we're talking about artificial superintelligence (ASI) then really nothing is totally off the table. Imagine an ASI pursuing some alien goal via self-replicating machinery, growing exponentially until the whole earth is anti-terraformed and incompatible with biological life.
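To put a toy number on "growing exponentially": here's a back-of-the-envelope sketch, where the 1 kg starting mass and the one-day doubling time are pure illustrative assumptions (only Earth's mass is a real figure):

```python
# Toy illustration of how fast exponential self-replication runs away.
# All numbers are illustrative assumptions except Earth's mass.
import math

start_mass_kg = 1.0        # assume 1 kg of replicators to start
earth_mass_kg = 5.97e24    # actual mass of the Earth

doublings = math.log2(earth_mass_kg / start_mass_kg)
print(f"{doublings:.0f} doublings")  # ~82

# At an assumed one doubling per day, that's under three months from
# 1 kg to planetary mass. The point is only how short exponential
# timelines are, not that this scenario is physically achievable.
```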
As Arthur C. Clarke said, any sufficiently advanced technology is indistinguishable from magic. (Maybe I'm not helping my case by citing a sci-fi author here.)
My deeper point is that (1) we can say ASI probably won't or can't do something like that, or that we expect not to be so reckless as to build such a thing without safeguards against that, but (2) we're so in the dark about how AI does what it does -- AI is grown more than engineered -- and otherwise so in the dark about how this may all play out that it's hard to push the probability of ASI killing literally all humans very far below 10%. Which is terrifying.
But maybe AGI is not very close and maybe AGI doesn't lead quickly to ASI? My own intuition is that AGI does lead quickly to ASI, but that AGI this decade is implausible. But I'm racked with doubt about that.