AI-generated picture of an AI bot talking to a human, turned into a cubomania gif

A couple of days ago I saw a “guess the cubomania” challenge from Theo. I’ve had an interest in Cubomania in the past and played around with the idea a bit. After a chat with D., who gave me a few engravers, I googled a bit and guessed, wrongly, Goya.

Next I thought to ask ChatGPT. It suggested it could match by image matching techniques, gave me a fairly obviously wrong first row and ran out of credit.

I then thought to ask Claude to make me an interactive page where I could drag things around. It made a couple of not very good attempts.

I was thinking about a better prompt, when I remembered and asked:

Could we use the whole image for each piece but ‘crop’ it with css?

Claude replied:

Brilliant idea! Yes, we can absolutely use CSS to create a “window” effect where each piece shows only its portion of the full image. This is much more elegant than trying to extract individual pieces.​​​​​​​​​​​​​​​​

I was flattered1 and, when Claude came up with another fail, I decided to abandon AI and DIY. This turned out a lot better. I started by remembering background-position and finding interact.js. The last time I did any drag and drop I dimly recall some sort of jQuery and a shim for mobile/tablets. interact.js did a grand job for my simple needs. It was probably overkill as it seems to do a lot more.
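The “window” trick is really just arithmetic: every piece gets the whole image as its background, and background-position pulls the image up and left so only that piece’s portion shows through. A rough Python sketch of the offsets (the 100px tile size is an assumption for illustration, not what my page uses):

```python
def background_position(row, col, tile_w=100, tile_h=100):
    """CSS background-position for a piece showing the window at
    (row, col) of the full image. The offsets are negative because
    the background is shifted up and left to bring the wanted part
    of the image under the piece."""
    return f"background-position: {-col * tile_w}px {-row * tile_h}px;"

# Piece at row 1, column 2 of a grid of 100px tiles:
print(background_position(1, 2))
```

Each piece then only needs its own little style rule rather than its own image file.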

Cubomania Solver

Partially completed sliding tile puzzle on a yellow background, featuring black and white sketch-style artwork. Some tiles are in place forming parts of faces and figures, while others are missing or scattered around the screen.
Screenshot

It is pretty simple stuff, but potentially a lot of fun: different images, making cubomania puzzles, who knows. I did extend it a bit, learning about localStorage (to save any progress) and the dialog tag. All without AI, but a few visits to the HTML reference – HTML | MDN – and the odd search.

I had a lot of fun with this, more than if I had just managed to get either of the AIs to do the whole thing. What it did make me think is that AI chat was useful for working out what I wanted to do and how to do it. I could probably have done that bit all by myself too. Usually I just start messing about and see what happens. This points to a bit of planning; maybe typing some notes/pseudocode/outline might work for me when I am playing.

  1. See: The machine began to waffle – and then the conductor went… In the paper the title was Artificial Intelligence: The Technology that lies to say yes. ↩︎

The Featured Image of this post was generated by ChatGPT in response to “I want an image of a chatbot character chatting with a person, friendly, helpful & futuristic.” It has been run through Cubomania Gif!

A gif of the terminal running videogrep

I’ve followed the #ds106 daily create for quite a few years now. The other day the invite was to use PlayPhrase.

PlayPhrase will assemble a clip of movie scenes all having the same phrase, a small supercut if you will.

The results are slick and amusing.

I remember creating a few Supercuts using the amazing Videogrep python script. I thought I’d give it another go. I’ve made quite a few notes on using Videogrep before, but I think I’ve smoothed out a few things on this round. I thought I might write up the process DS106 style just for memory & fun1. The following brief summary assumes you have command line basics.

I decided to just go for people saying ds106 in videos about ds106. I searched for ds106 on YouTube and found quite a few. I needed to download each video and an srt (subtitle) file. Like most videos on YouTube, none of the ds106 videos I chose had uploaded subtitles. But you can download the auto-generated subtitles in vtt format and convert them to srt; both the downloading and the subtitle conversion are handled by yt-dlp2.

I had installed Videogrep a long time ago, but decided to start with a clean install. I understand very little about python and have run into various problems getting things to work. Recently I discovered that using a virtual environment seems to help. This creates a separate space to avoid problems with different versions of things. I’d be lying if I claimed I could explain much about what these things are. Fortunately it is easy to set up and use if you are at all comfortable with the command line.

The following assumes you are in the terminal and have moved to the folder you want to use.

Create a virtual environment:

python3 -m venv venv

Turn it on:

source venv/bin/activate

Your prompt now looks something like this:

(venv) Mac-Mini-10:videos john$

You will also have a folder venv full of stuff.

I am happy to ignore this and go on with the ‘knowledge’ that I can’t mess too much up.

Install Videogrep:

pip install videogrep

I am using yt-dlp to get the videos. As usual I am right in the middle when I realise I should have updated it before I started. I’d advise you to do that first.

You can get a video and generate an srt file from the YouTube auto-generated subtitles:

yt-dlp --sub-lang "en" --write-auto-sub -f 18 --convert-subs srt "https://www.youtube.com/watch?v=tuoOKNJW7EY"

This should download the video and the auto-generated subtitles, and convert them to an srt file!

I edit the video & srt file names to make them easier to see/type.

Then you can run Videogrep:

videogrep --input ds106.mp4 --search "ds106"

This makes a file Supercut.mp4 of all the bits of video with the text ‘ds106’ in the srt file.
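Conceptually, what videogrep is doing here is: parse the srt into timed cues, keep the cues whose text matches the search, and cut those time ranges out of the video. A much-simplified Python sketch of the matching step, not videogrep’s actual code:

```python
import re

def matching_cues(srt_text, pattern):
    """Return (start, end, text) for each srt cue whose text matches."""
    cues = []
    # srt cues are blank-line separated: index, timing line, text lines
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start, end = lines[1].split(" --> ")
        text = " ".join(lines[2:])
        if re.search(pattern, text, re.IGNORECASE):
            cues.append((start.strip(), end.strip(), text))
    return cues

sample = """1
00:00:01,000 --> 00:00:03,000
welcome to ds106

2
00:00:04,000 --> 00:00:06,000
nothing to see here
"""
print(matching_cues(sample, "ds106"))
```

The real tool then hands the matching time ranges to the video cutter; the srt parsing above is the part that decides what ends up in the supercut.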

I did a little editing of the srt file to find and replace ds-106 with ds106, and ds16 with ds106. I think I could work round that by using a regular expression in videogrep.
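That find-and-replace can be done in one pass with a regular expression that catches the misheard variants. A small sketch; ds-106 and ds16 are the variants I actually saw, and anything else would need adding to the pattern:

```python
import re

def normalise(text):
    # collapse the auto-caption mishearings to a single spelling;
    # handles "ds-106", "ds 106" and "ds16", case-insensitively
    return re.sub(r"ds[-\s]?106|ds16", "ds106", text, flags=re.IGNORECASE)

print(normalise("Welcome to DS-106, also known as ds16"))
```

Run over the srt file before videogrep sees it, this means a single search term picks up all the variants.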

After trying that I realised I wanted a fragment, not a whole sentence; for that you need the vtt file. I can download that with:

yt-dlp --write-auto-sub --sub-lang en --skip-download "https://www.youtube.com/watch?v=tuoOKNJW7EY"

Then I rename the file to ds106.vtt, delete the srt file, and run:

videogrep --input ds106.mp4 --search "106" --search-type fragment

I shortened ds106 to 106 as vtt files seem to split the text into ds and 106.
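The splitting makes sense once you look inside the vtt: YouTube’s auto-generated subtitles carry word-level timing, with each word after the first wrapped in its own timestamped `<c>` tag, so the fragment search sees ds and 106 as separate tokens. A rough sketch of pulling the word tokens out of one cue line; the sample line imitates the format I saw, and real files vary:

```python
import re

def vtt_words(line):
    """Extract the word tokens from a YouTube auto-sub vtt cue line."""
    # words after the first sit inside <c>...</c> tags
    words = re.findall(r"<c>\s*([^<]+?)\s*</c>", line)
    # the first word sits before any tag
    head = re.match(r"^([^<]+)", line)
    if head:
        words.insert(0, head.group(1).strip())
    return words

line = "welcome<00:00:01.319><c> to</c><00:00:01.800><c> ds</c><00:00:02.100><c> 106</c>"
print(vtt_words(line))
```

Searching for "106" matches the lone fragment, which is why shortening the search term works.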

I ended up with 4 nice wee Supercut files. I could have run through the whole lot at once but I did it one at a time.

I thought I could join all the videos together with ffmpeg, but ran into bother with dimensions and formats so I just opened up iMovie and dragged the clips in.
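For the record, the ffmpeg route I gave up on is the concat demuxer: you write a text file listing the clips, then run `ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4`. Copying streams like that only works when every clip shares the same codec and dimensions, which is the bother I ran into. A sketch of building the list file; the filenames are made up:

```python
def concat_list(filenames):
    """Build the contents of an ffmpeg concat-demuxer list file."""
    # each line has the form: file 'name.mp4'
    # the quotes guard against spaces in filenames
    return "\n".join(f"file '{name}'" for name in filenames) + "\n"

print(concat_list(["supercut1.mp4", "supercut2.mp4"]))
```

With mismatched clips you would have to re-encode them to a common size first, at which point dragging them into iMovie is honestly less fuss.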

At the end, close the virtual environment with:

deactivate

Reactivate it later with:

source venv/bin/activate

This is about the simplest use of videogrep; it can do much more interesting and complex things.

  1. I am retired, it is raining & Alan mentioned it might be a good idea. ↩︎
  2. I assume you have installed yt-dlp, GitHub – yt-dlp/yt-dlp: A feature-rich command-line audio/video downloader. As I use a Mac I use homebrew to install this and some other command-line tools. This might feel as if things are getting complicated. I think that is because it is. ↩︎

Likes Bop Spotter by Riley Walz.

installed a box high up on a pole somewhere in the Mission of San Francisco. Inside is a crappy Android phone, set to Shazam constantly, 24 hours a day, 7 days a week. It’s solar powered, and the mic is pointed down at the street below.

What a great idea. Webpage looks super too. via jwz.

ipod classic screen with Radio Sandaig podcast episodes listed.

I found my old iPod last night; it took a while to get it to boot, but I recorded a microcast just for nostalgia. I used this quite a lot around 2005–9 to record podcasts with my primary classes. There seem to be some interesting crackles added this time.

Surprisingly it mounted on my Mac; I could drag the wav file to the desktop and convert it to mp3, no other editing.

A montage of phone lock screens showing photos of nature

I don’t usually pay a lot of attention to new features when an OS updates nowadays. But the other day I discovered the “photo shuffle” Lock Screen feature on my phone. Now every time I unlock my phone I see another random image. I picked nature as the subject. I am not sure what algorithm is picking the photos but the results are delightful.