Category: AI

  • Learning chords with ChatGPT voice conversations

    Asked ChatGPT voice* to quiz me on chords while I was walking around the neighbourhood.

    Voice recognition was good and response time was good. I didn’t love the “connecting…” stage to kick off the chat, and it had a bit of a buggy launch, but once started it was solid.

    Initially it would ask a single question, wait for my answer, then ask “do you want to continue?”. I asked it to skip that and just keep going until I left, which it did remember to do.

    It kept pronouncing Bb as “beebee” and Eb as “Ebb”, and wouldn’t remember the correction. I could maybe have solved this with the initial prompt – either “you are a music teacher” or “always pronounce sharp and flat correctly”.

    Overall, launching all of this from “please quiz me on chords for different keys”** was very impressive.

    One part later on, when I put my phone on the table and left the room, was a little unsettling: apparently the sound of my door closing said “This is Deokyoung Lee from MBC News” in Korean.


    *Enabled in beta features of app.

    **actual initial prompt: “Can you quiz me on notes in piano chords in different keys? Either the key name and I’ll name the notes.”


  • Will LLMs worsen link rot?

    Link rot is bad enough already, and it will probably get worse if “sources” are added automatically.

    Example Conversation

    “Comparing Animal Strengths in Relation to Humans” linked to “Influence of the Hohenwarte reservoir on tilt and strain observations at Moxa”
    “Human vs. Animal Physical Capabilities” was a 404 on oxford academic.

    Since it’s prone to hallucinating external information, there seems to be a tendency to either get links wrong or make them up entirely. Some non-web-enabled models would claim to have read a provided link while actually guessing at the content based on the link text.
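
    Machine-generated citations like these could at least be spot-checked automatically. A minimal sketch, assuming Node 18+ with a global fetch; the function names are made up for illustration:

```javascript
// Classify an HTTP status for link-rot purposes.
function classifyStatus(status) {
  if (status === null) return "unreachable"; // DNS failure, timeout, etc.
  return status >= 200 && status < 400 ? "ok" : "rot"; // 404s land in "rot"
}

// Check each cited link with a HEAD request.
async function checkLinks(links) {
  const results = [];
  for (const { title, url } of links) {
    let status = null;
    try {
      status = (await fetch(url, { method: "HEAD" })).status;
    } catch (err) {
      // Network errors leave status as null.
    }
    results.push({ title, url, verdict: classifyStatus(status) });
  }
  return results;
}
```

    Anything classified as “rot” or “unreachable” would still need a manual check or an archive link – and of course this can’t catch a link that resolves fine but points at an unrelated paper.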

  • Generate app implementation details with ChatGPT

    Conversation

    My first question was far too general, so it got me some generic, unhelpful answers.

    Asking for 20 ideas (inspired by the Brainstorming section from this talk) and including specifics got better, more specific answers.

    Conversation helps refine requirements. In this case I wanted to think of some different possible interfaces, and maybe technology suggestions for a specific part of it.


  • Mixxx MIDI Mapping with ChatGPT Code Interpreter

    Conversation

    Some success, still in progress.

    A lazy attempt to get a 2-deck controller to control 4 decks in Mixxx, like this example.

    There was a fun part about halfway through where it just started hammering through errors, trying to debug itself. I think the uploaded file had expired by then (I had also switched computers). But the “Show work” view of the troubleshooting steps is still interesting to look through – things like using functions before importing them.

    In an earlier attempt it didn’t know the actual MIDI values of the controller, so I started by uploading someone else’s mapping and asking some questions about it – it also had some features, like pitch fader scaling, that I wanted to understand. The initial extraction and explanation were pretty good, but it got a bit lost when trying to make changes and actually implement things.

    By the end it had produced a reasonably complete-looking mapping JS file. But there were some issues with the actual mapping, follow-up questions for fixing syntax errors didn’t go well, and it got a bit confused when I uploaded the updated files (it still referred to things from earlier versions of them). Maybe a good reason to start new chats for troubleshooting, though keeping the context of the previous chat would be useful.

    Might try “generate a summary of this chat to start a new one with context and the current versions of the files”.
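
    For reference, the core of a 2-deck-controls-4-decks mapping is a layer toggle that remaps which channel group each physical deck addresses. This is just a minimal sketch, not the generated mapping: the controller object and handler names are made up, and Mixxx’s real engine.setValue(group, control, value) scripting call is stubbed so the logic can run standalone:

```javascript
// Stub standing in for Mixxx's scripting engine so this runs outside Mixxx.
var engine = {
  state: {},
  setValue: function (group, control, value) {
    this.state[group + "." + control] = value;
  },
};

// Hypothetical controller object in the style of a Mixxx mapping script.
var MyController = {
  layer: 0, // 0 = physical decks drive [Channel1]/[Channel2], 1 = [Channel3]/[Channel4]

  // Resolve a physical deck (1 or 2) to the Mixxx channel group for the active layer.
  groupFor: function (physicalDeck) {
    return "[Channel" + (physicalDeck + 2 * this.layer) + "]";
  },

  // Bound to a dedicated "deck switch" button on the controller.
  toggleLayer: function () {
    this.layer = this.layer === 0 ? 1 : 0;
  },

  // Example button handler: play on whichever deck the layer selects.
  playButton: function (physicalDeck, pressed) {
    if (pressed) {
      engine.setValue(this.groupFor(physicalDeck), "play", 1);
    }
  },
};
```

    With layer 0, pressing play on physical deck 1 sets play on [Channel1]; after toggleLayer(), the same button drives [Channel3]. The rest of a real mapping is mostly wiring MIDI note/CC numbers to handlers like these.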