Thinking critically about art

I spent much of today writing the new week’s lesson for my online ethics & critical thinking classes. This is more on the critical thinking side – the topic being Art.

I start by showing the kids a picture that “I made”, and I ask them if they would call it “art”. In the three classes I ran tonight, everyone said yes. Then I reveal that I made it using the online AI art generation program Craiyon. I share the web page live, type in a prompt, and show them how it generates pictures. Then I re-ask the question – now that they know the picture was produced by an AI system, is it still “art”? Can a computer program produce something that we’re happy to call “art”?

Then we talk for a while about the meaning of art. I show a Picasso painting from the Spanish Civil War period, when he produced a lot of artwork with sad imagery. I ask them what feeling they get from it, and many of the kids so far have said sadness. Then I explain why Picasso painted such images – his reaction to the war – and ask if knowing that makes them appreciate the art any more. Most of them agreed that it does.

Then I go back and ask is there any possible meaning behind the AI-generated art? If not, does that automatically make it inferior to human-produced art, or not? What if you can’t tell the difference? Does it matter?

And then I go into some possible uses for AI-generated art. And ask the kids what they think it means for the future of human artists.

There’s more to the lesson, diverging into a few other themes: destroying art, and whether famous/historical/significant art should be free for the public to view or not. I think it’s a good lesson, and it’s more fun and less stressful for me to teach than last week’s topic on cloning.

New content today:

Cloning pets

I finished up the last lessons of the week’s cloning topic today. In the very last class I raised the same question I’d asked in all the others: Would it be okay to clone dogs or cats if people wanted a new pet that was like their old one? And then I mentioned that the company Viagen exists and has been doing this for 7 years.

And one girl in this class erupted: “Oh my god! I have to clone my dog! I didn’t know they could do this! I’m going to tell my parents! I don’t care how much it costs!”

Okay… I hope her parents will be able to deal with this!

And this evening was lecture 2 of the image processing course at university. We covered image and video formats and then some basic image preprocessing operations. This is the easy stuff before we get stuck into full-on image filtering next week.

For dinner before the lecture I had ramen at a nearby Japanese place. I had a bad ramen at a different place near the university earlier this year, so this time I went to the place I know is good.

New content today:

AI-generated horror

I’ve been playing a bit with all of the cool AI-powered art generation tools that have been unleashed in recent times. I mentioned the other day that I got an invite to DALL-E. Rather than burn through my free credits trying stuff, I’ve been trying random things with Craiyon, a free site that uses DALL-E Mini.

Unrelatedly, I’ve been watching a bunch of horror movies that Netflix has been recommending to me. At some point I veered off into Asian horror films, of which there seem to be plenty for it to keep recommending. I seem to have hit a local maximum in its “you might like this” algorithm, such that nearly everything it recommends to me these days is an Asian horror film.

Making the connection between horror films and AI, I decided to try hitting Craiyon with the prompt: Scene from a Japanese horror film. Here’s what it came back with:

AI generated scene from Japanese horror film

Yep. That’s pretty close to what I expected. A creepy long-haired ghost girl, trading on the Yotsuya Kaidan story and the indelible influence it has had on Japanese horror, via The Ring. Notice also the typically Japanese shoji walls, most noticeable in the bottom left frame.

All right. I’ve also been watching some Chinese horror, so let’s try: Scene from a Chinese horror film:

AI generated scene from Chinese horror film

Interesting! It’s mostly similar long-haired ghost girls, but with a vividly different colour palette. The delicate shoji walls have been replaced by brutalist concrete walls. There are also several apparent victims, lying on the floor in shrouds. And some interestingly creepy pictures on some of the walls.

Next up: Scene from a Korean horror film:

AI generated scene from Korean horror film

More long-haired ghost girls, but with a much greater emphasis on the faces and their blood-curdling expressions. We also have a few boys or young men who might be victims, or perhaps relatives of the ghost girl. The colour palette is a bit more blue/yellow and less green than the Chinese examples.

Okay, let’s try moving away from Asia, to Europe, beginning with: Scene from a French horror film:

AI generated scene from French horror film

Now our walls have curtains and doors. We’ve gone back to a mostly black and white palette. And the long-haired ghost girl is replaced by a range of spooky figures with recent haircuts, or horrified victims – particularly that anguished looking close-up of the woman’s face at centre right. In the bottom left we have what might be a witch hovering by someone’s bedside, waiting to bestow a curse. Definitely a more European classic cinema vibe here.

I’ve also seen a couple of German films recently, which have been fairly modern and based around teenagers getting into spooky situations. Honestly they felt more like Scooby Doo than serious horror. So let’s try: Scene from a German horror film:

AI generated scene from German horror film

Oooh. Getting some Max Schreck Nosferatu vibes here, although not too explicitly. The exterior farmhouse at top middle is interesting – the first identifiably exterior scene generated so far. Good choice though because, as we all know, farmhouses are 90% more spooky than most other buildings. Definitely more of a vampire feel than ghosts here. And a couple of frames of Nazis, which I suppose is fair enough for the horror genre.

Now let’s try some English-speaking origins. We’ll start with: Scene from a British horror film:

AI generated scene from British horror film

Interesting. I’m not quite sure what to make of this. There seem to be a few people in masks, another creepy outdoor farmhouse, and in the bottom left what looks like a shadowy mob. Intriguing candlelight and shadows.

Contrast with: Scene from an American horror film:

AI generated scene from American horror film

There are definitely a lot more interior rooms here, with doors. I guess American horror hinges a lot more on people lurking through doorways.

And finally: Scene from an Australian horror film:

AI generated scene from Australian horror film

I’m not sure that anything here particularly implies Australia. It just seems to be some more semi-generic ghosty building stuff. I don’t know what that claw-like shadow is in the upper left panel, but it’s nice and spooky.

New content today:

Games night and walky day

Friday night was online games night, so I didn’t have time for a blog entry. We played a bunch of the usual suspects. I tried a very different strategy in Splendor this time, after losing badly last time. Last time I concentrated on buying as many cheap cards as possible to build up my power to purchase things without having to collect lots of gems – but by the time I was ready to get rolling and buy lots of point-scoring cards, the game had ended. Someone told me that wasn’t a great strategy – you have to start buying more expensive cards earlier. So this time I started buying some of the second-tier cards as soon as I could, saving up lots of gems to do so. But at one point I ran into the rule (which I’d never noticed before) that you have to discard down to 10 gems at the end of your turn. So I wasted a whole turn grabbing extra gems that I then just had to throw away! Anyway, the result was just as bad as last time. I was just getting into a groove and planned to purchase some valuable point-scoring cards in my next two turns… and the game ended! I never got those turns, and came last, again.

The other main thing I did on Friday was attend the Standards Australia meeting on photography, which is a follow-up to the ISO meeting I attended in Cologne back in June. I had to present a report on that meeting, which is really the main part of the local meeting, bringing all the Australian experts up to date. It was the first face-to-face meeting we’ve had since COVID began, so it was good to catch up with people again. And we have a new project manager at Standards Australia (who ran our last meeting via Zoom), so it was nice to meet her for the first time.

I slept in a bit this morning. I needed to catch up on sleep, as a neighbour’s baby has been crying a lot at night recently. I worked mostly on comics today, but took a break early in the afternoon to go on a long walk with my wife and Scully, over to the good bakery. We just had some small pastries for afternoon tea – I had a cheese danish.

New content yesterday:

New content today:

Editing comics

I’m currently reading a book on editing comics that I got from Kickstarter: Filth & Grammar by Shelly Bond. The book is all about what a comics editor does.

One main part is about what a comics editor should look for when giving feedback/requests to the author/artists. One thing it really emphasises is that every page of a comic should have around 3 or 4 bolded words to indicate speech stresses. If a page comes in with no words bolded, send it back and tell them to bold some.

I’ve long noticed that comics seem to do this – have multiple bolded words on each page. But I always thought the profusion of bold-emphasised words in comics was way overdone. I find it distracting and annoying. I wonder who in the comics industry decided that comics had to have bold-emphasised words on every single page.

In other news, I got an invitation for DALL-E today, and I’ve been playing with it a bit, generating some artwork. Not too much because it’s easy to burn through your 50 free credits and have to start paying money to keep using it.

It also came with some free credit to try using GPT-3 text generation. I gave it this prompt:

Write the rules for a Euro-style board game based on medieval monks brewing beer.

And here’s what GPT-3 produced:

1. The game is played with two to four players.

2. Each player has a monastery, and starts the game with two brewmasters and two workers.

3. The game is played over six rounds. In each round, the players take turns performing one of the following actions:

a. Collecting ingredients: The player rolls two dice. They may choose to collect either the number of hops or barley indicated by the dice. If they choose to collect hops, they take that number of hops from the supply. If they choose to collect barley, they take that number of barley from the supply.

b. Brewing beer: The player spends one hop and one barley to brew one barrel of beer. The player may choose to brew either ale or lager.

c. Selling beer: The player may sell up to three barrels of beer. They may sell either ale or lager, but not both at the same time. They receive three gold pieces for each barrel of ale sold, and four gold pieces for each barrel of lager sold.

4. At the end of each round, the players check to see if they have enough beer to satisfy their workers. If they do not, they must pay one gold piece for each worker that is not satisfied.

5. The game ends after six rounds. The player with the most gold pieces wins the game.

Wow. That’s almost a sensible game. There are some obvious issues, but you could certainly start from this and tweak it into a playable game.
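Out of curiosity, the economic loop GPT-3 describes is concrete enough to sketch in code. This is just a toy sketch of one reading of the rules – the `Player` class and the interpretation of rule 4’s upkeep are my own assumptions, not part of the generated text:

```python
# Toy sketch of the beer economy from the GPT-3 rules.
# Prices come from rule 3c; end_of_round() is one possible
# reading of the (ambiguous) worker-satisfaction rule 4.

PRICES = {"ale": 3, "lager": 4}  # gold per barrel sold

class Player:
    def __init__(self):
        self.hops = 0
        self.barley = 0
        self.beer = {"ale": 0, "lager": 0}
        self.gold = 0
        self.workers = 2  # per rule 2

    def brew(self, kind):
        # Rule 3b: one hop + one barley makes one barrel.
        if self.hops >= 1 and self.barley >= 1:
            self.hops -= 1
            self.barley -= 1
            self.beer[kind] += 1

    def sell(self, kind, barrels):
        # Rule 3c: at most three barrels, and only one type per action.
        barrels = min(barrels, 3, self.beer[kind])
        self.beer[kind] -= barrels
        self.gold += barrels * PRICES[kind]

    def end_of_round(self):
        # Rule 4: pay 1 gold per worker not "satisfied" by a barrel,
        # assuming satisfaction means one barrel in stock per worker.
        unsatisfied = max(0, self.workers - sum(self.beer.values()))
        self.gold -= unsatisfied
```

With two hops and two barley, brewing one ale and one lager and selling both nets 7 gold, and the round-end upkeep then costs 2 gold for the two thirsty workers. Even this toy version shows one of the “obvious issues”: selling all your beer guarantees an upkeep penalty.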

New content today:

Ethics of cloning

I had three more online ethics classes today on the cloning topic. There’s an interesting diversity of opinions among the kids in the classes on this. I ask if it’s okay for a married couple who can’t have a baby naturally to have a baby that’s a clone of the mother.

  • All the kids in one class: No. You shouldn’t clone humans.
  • All the kids in another class: Sounds fine, sure, why not?

I had a follow-up question ready, asking if it’s okay for a single woman to have a clone baby of herself, if she doesn’t want a husband/boyfriend. Obviously there was no point asking this of the first class, so I skipped it. I asked the second class, thinking some of them might object to single women having children. But they all said it would be fine.

We actually start talking about cloning animals and spend half the lesson on that before tackling humans. I ask about cloning farm animals, or endangered species, or pets. So far almost all of the kids have been surprised to learn that there is a company that has been cloning pet dogs and cats since 2015. Some of them think that’s okay, while others think it’s a waste of money and people should just get a new pet instead of cloning an old one, and some think it’s bad because there are already plenty of dogs and cats who need homes.

By the end of the lesson we’re discussing why human cloning, which is feasible with our current technology, hasn’t been done yet (as far as anyone knows). How would a person feel if they grew up to learn they were a clone? Most of the answers have been pretty negative.

Also today I had my face-to-face ethics class at the school this morning. It was the last lesson on the topic of tolerating/respecting other people’s beliefs. It was actually really interesting because through this topic several kids changed their minds on one of the main questions: If Anna (a child) believes smoking is okay because her grandmother smokes and is perfectly healthy, should we respect her beliefs? And should we allow her to publish her beliefs in the school newsletter?

Initially the kids were mostly against these, as they all agreed smoking was dangerously bad. But through discussion about how to argue convincingly with someone, by presenting evidence in a polite manner, and letting people make informed decisions by presenting them with multiple ideas, they started to converge on the idea that they should respect Anna’s beliefs, and even let her publish them – as long as opposing evidence was also presented. It was very impressive to see.

The weather today was delightful. Sunny, and the temperature got up to 24.4°C. That’s the warmest day since the middle of May, and very welcome it was after the weeks of chilly winter conditions. This is a real taste of spring, and with flowers bursting all over the place already – magnolias, camellias, azaleas – it really felt like spring today.

And a final thing that happened today: There was a truck accident at the end of my street. A crane truck came down the hill on the main road and lost control, ending up embedded in a pedestrian island in the middle of the cross street (my street). The island was planted with shrubs and mulch, so the truck cab ended up stuck deep in the soil and couldn’t be driven out. This blocked the main road in both directions for several hours, and traffic had to be diverted on quite a large detour. I don’t know if anyone was hurt in the accident, but from what I saw it didn’t look like any other vehicles were involved, and the crane driver should have been able to get out okay. The main danger would have been if a pedestrian had been crossing the cross street there – as I do regularly – in which case they could easily have been hit. So I’m glad I wasn’t there at the time.

New content today:

Four classes in one day

I’ve had to move my Monday ethics extension class to Tuesday because of the university teaching on Monday, so today I had 4 classes instead of 3, which is the most I’ve ever done in a single day. I know it’s not a full 8 hours work, but it’s still exhausting!

At lunch I took Scully for a long walk, down to the Greenwich Point ferry wharf (a longer walk than the Greenwich wharf, where we take her sometimes too). The weather was nice – sunny and not too cold. It should warm up a bit more this week too, bringing our first hint of spring.

New content today:

Starting image processing again

Today was the first lecture in this year’s iteration of the Image Processing and Pattern Recognition course that I helped teach at the University of Technology Sydney last year. Last year the whole course was taught online via Zoom, but this year we’re doing it face-to-face at the university (like the Data Engineering course I taught earlier this year).

This morning I did the final three online ethics classes with kids on the subject of superstition. Then I took Scully for a short drive over to a nearby suburb to get some lunch, and on the way home popped into the hardware store to buy some damp absorber crystals. This stuff has been out of stock for ages because of high demand from everyone in Sydney dealing with mould problems due to all the rain we’ve been having. But I noticed it in stock at the supermarket last week, and decided to check if the hardware store had any (since it’s much cheaper there). They did, so I bought a couple of large containers.

Then late this afternoon I headed into the city. I stopped off for an earlyish dinner at a Thai street food place, before getting to the university in time for the 6pm start of the class. Being the first class it was a pretty easy introduction and finished early. Next week we’ll get stuck into the technical material and helping students with the tutorial assignment questions.

New content today: