kfsone an hour ago

Minor nit: in "familiarity", you gloss over the fact that it's character-based rather than token-based, which might be worth a shout-out:

"Microgpt's larger cousins using building blocks called tokens representing one or more letters. That's hard to reason about, but essential for building sentences and conversations.

"So we'll just deal with spelling names using the English alphabet. That gives us 26 tokens, one for each letter."

  • b44 an hour ago

    hm. the way i see things, characters are the natural/obvious building blocks and tokenization is just an improvement on that. i do mention chatgpt et al. use tokens in the last q&a dropdown, though

msla 38 minutes ago

About how many training steps are required to get good output?

  • b44 30 minutes ago

    not many. diminishing returns start before 1000 and past that you should just add a second/third layer
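
    as a rough illustration of that schedule (everything below is a made-up stand-in, not microgpt's actual training code):

        import math

        MAX_STEPS = 1000   # diminishing returns reportedly start before this
        n_layer = 1

        def train_step(step: int) -> float:
            # stub for one optimizer update; returns a fake, plateauing loss
            return 2.5 / math.log(step + 2)

        loss = float("inf")
        for step in range(MAX_STEPS):
            loss = train_step(step)

        # past ~1000 steps, add depth instead of training longer
        if loss > 0.5:     # illustrative threshold only
            n_layer += 1   # retrain with a second (or third) layer

        print(f"loss ~{loss:.3f}, n_layer={n_layer}")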