I don't think so. People don't generally read a lot of Balzac and then write a series of Balzac-like things.
That's because humans aren't restricted to just one set of training data; everything we experience in life trains our representational models. You can pick out influences in people's writing, and some writers do write like other writers.
The AI isn't trained to think like Berkeley or to have a rationalisation of technology like Berkeley's.
This is true; there's no indication that it understands what it's making in any deeper sense than the probabilities of a set of pixels appearing in certain positions on a field.
However, we don't really understand how human vision or creativity actually works either - so we don't know whether that kind of understanding happens outside of conscious experience or not. Chomsky seems to think 80%+ of processing happens outside consciousness.
The next level of adversarial / GAN networks is far beyond what I'm using here (see David Martinez's work above), and those models seem to have an intuitive sense for reflections in water and other phenomena, which suggests some level of "understanding". An AI which can rationalise technology may not be that far away.
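For anyone curious what "adversarial" actually means here, below is a minimal sketch of the two-network idea behind GANs (assuming PyTorch, with made-up layer sizes purely for illustration; the models mentioned above are obviously far bigger and more sophisticated than this). The point is just that the generator never gets told anything about water or reflections, it only learns whatever regularities help it fool the discriminator:

```python
# Minimal sketch of the adversarial / GAN idea, assuming PyTorch.
# Sizes and architecture are made up for illustration only.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # hypothetical dimensions

# Generator: noise in, flattened "image" out.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# Discriminator: flattened image in, "real or fake" score out.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: real images should score 1, generated images 0.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), ones) + \
             loss_fn(discriminator(fake_images), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator score its output as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Any "sense" of how reflections behave emerges only because getting them wrong is an easy tell for the discriminator - whether that counts as understanding is exactly the question.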
Personally, I like the impossible nature of the images - it's an aesthetic that's valid in itself even if it's not a perfect representation of "real painting". They're also useful as starting points for digital paintings.