Divinity reminds me more than a little of Astron 6's fake trailer for Bio-Cop
Yes, but you also have the box office split, which is generally lower from overseas. It’s a pretty slim margin. The only thing that makes sense to me is that it’s basically a favor to Cameron, and it’s probably not a coincidence that the news is coming out after the release of Way of Water.
For me, it felt like a fan edit of some hypothetical Alita trilogy, one that kept the fights at the expense of scene setting, relationship building, and the better third of a love triangle.
There was a real question as to whether it had broken even. Maybe it was a quiet cult hit on streaming? It seems like an odd pick for a sequel.
I think G Gundam often gets a pass because people remember the handful of fun moments and forget the bulk of it, which is mind-numbingly dull.
I’d like to claim the moral high ground and say it was Miller’s crime spree that keeps me away, but it just looks bad. The cinematography and CGI look awful, and I don’t have a great deal of nostalgia generally or affection for the Keaton Batman specifically.
There are scenes floating around on Twitter, and it’s real. It looks bad even ignoring the compression.
I’ve heard opposing explanations. The blade has no weight, so there’s no kinesthetic sense of where it is. Or the blade is this weird torrent of energy you need the Force to anticipate or guide. I imagine something like moving a gyroscope and feeling unintuitive forces.
Don’t cut yourself on that edge
I think you could argue that the trained model itself is derivative of unmodified and untransformed images and data.
The models don’t store images per se, but overfitting on the training data means that the models are often capable of outputting training data given the corresponding inputs.
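A minimal sketch of that memorization point, as a toy example and not any real model: when a model has more parameters than training examples, fitting it can amount to memorizing the data, so the training "prompts" reproduce the training "images" almost exactly.

```python
# Toy memorization demo (illustrative only): a linear "generator" maps
# one-hot prompt vectors to flattened 4x4 "images". With one parameter
# column per training example, the fit reproduces the training set.
import numpy as np

rng = np.random.default_rng(0)

n_images, img_pixels = 5, 16
images = rng.random((n_images, img_pixels))   # stand-ins for training images
prompts = np.eye(n_images)                    # one "prompt" per image

# Fit weights W so that prompts @ W ~= images (least squares).
W, *_ = np.linalg.lstsq(prompts, images, rcond=None)

reconstructed = prompts @ W
print("max pixel error on training data:",
      np.abs(reconstructed - images).max())   # ~0: training data regurgitated
```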
He does kinda have the doped-up Mr. Burns eyes. He also kinda has that overly smooth look of Fortnite characters.
It would depend on the model architecture and the training data. You could potentially include the number of fingers in an image as part of the training data and the model might then make the connection. You could make a model that generated a “skeleton” as part of the image generation and that could emphasize…
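A rough sketch of that conditioning idea (all names here are hypothetical, not from any real library): pass the extra annotations, say a finger count or a pose "skeleton", to the generator alongside the usual noise/prompt input so the model can learn the association.

```python
# Hypothetical conditioning helper: concatenate extra annotations
# (finger count, hand-skeleton keypoints) onto the generator's input.
import numpy as np

def conditioned_input(noise: np.ndarray,
                      finger_count: int,
                      skeleton_keypoints: np.ndarray) -> np.ndarray:
    """Concatenate noise with the extra conditioning signals."""
    condition = np.concatenate([[finger_count / 5.0],         # normalized count
                                skeleton_keypoints.ravel()])  # flattened x,y pairs
    return np.concatenate([noise, condition])

# Example: 64-dim noise, 5 fingers, 21 hand keypoints (x, y).
x = conditioned_input(np.random.default_rng(0).standard_normal(64),
                      finger_count=5,
                      skeleton_keypoints=np.zeros((21, 2)))
print(x.shape)   # (107,) -> the generator would consume this vector
```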
I think the end goal is probably to control commercial use of models trained on copyrighted material and their output. Models trained on public domain and licensed images should be fine.
The concept needs to be expanded slightly to protect artists. We can already see people feeding in the names of artists to emulate their styles, effectively forcing artists to compete with themselves. The models have an infinite capacity to scale and undercut any artist at any cost. Any new styles will be incorporated…
Essentially, with just a thin bit of abstraction. It’s plagiarism, not inspiration.
I have been using this technology for the last six years in industrial and academic research settings. You and your 8-minute Medium article need to sit this one out.
It absolutely is, and the sleight of hand isn’t even good. It’s a thin, threadbare abstraction from direct theft, and pretending the technology is something other than it is doesn’t change anything.
“Learning” is a very crude approximation; describing the process in terms other than high-level statistics isn’t accurate to what’s happening. Also, it’s a product that is 100% dependent on having a massive supply of art used without compensation. That’s stealing.
I think the trailer is actually really bad. It cuts together all the bombastic scenes when the anime is much more restrained.