This concise article describes K2 (version 2 of the K language). At the level it covers, little has changed, so it is still a good introduction to the language 12 years later.
However, beyond the basics, a lot has changed since then (the current commercial version is K4; current development is on K6, AFAIK): dictionaries have become better integrated into the language; the database layer was rolled into the language; the integrated (bare-bones and milspec-looking, but ultra-effective) electric GUI was dropped; HTTP and web support (as both client and server) were added to the core; 64-bit integers, nanosecond timestamps, and GUIDs were added as core types; and probably a few more things I forgot.
If you find this article interesting, you may want to experiment with JohnEarnest's ok [0], which provides a graphical playground (implemented in JS, so it runs everywhere), and also read Q for Mortals [1]. Q is a syntactic-sugar version of K4, though still the same language underneath.
[0] https://github.com/JohnEarnest/ok
[1] http://code.kx.com/q4m3/
Why remove the GUI stuff?
No commercial demand.
The K2 GUI was very handy for programmers, for rapid prototyping, but it is impractical to use it for making visually polished UIs you would give to a customer.
If a feature of K doesn't pull its weight, it is removed.
Gotcha...I would assume to leave it in for use in interactive data analysis, but if you REALLY want to keep things small...
Arthur starts over completely from scratch with each major release, and abhors drawing in dependencies. It's a good strategy for traveling light.
If you want to try out K, you can go to Kx and download the 32-bit version of kdb+; typing \ at the prompt (on its own) will enter K mode. This uses K4.
Alternatively, there's the free and open source Kona, which is an implementation of K3, the preceding version.
https://kx.com/download
https://github.com/kevinlawler/kona
I think my favorite way of trying K is John Earnest’s JS-based interpreter Ok:
http://johnearnest.github.io/ok/index.html
Arthur Whitney seems to have been developing this stuff pretty steadily. There's a sparse-but-quite-informative website at http://kparc.com/ where you can follow the development of the K5 and K6 dialects. It's been pretty quiet the last year or so, but there was a comment on HN a month or two back which suggested that K7 could be on its way.
For playing around with the basic constructs, https://github.com/JohnEarnest/ok is nice (and itself pleasingly concise). Currently targeting the K6 dialect, I think. It doesn't really have any of the "database" side of things, though.
Any news on kOS? I'd love to build a barebones high performance computer with K running on bare metal.
Nothing I’ve seen recently. Seemed to be tied to K5, and there’s been at least one re-write since then.
Pretty sure the idea is done; Art's working on other things at the moment.
Does anyone know of a good description of which parts of K's implementation makes it so fast? I have heard the interpreter itself is quite small, and easily fits in L1 CPU cache, which helps of course. Are the primitives further implemented using vector instructions or multi-threading? Does the K interpreter pattern-recognize compositions of constructs and dispatch to an optimized implementation, like with APL "idioms"?
No vector instructions at present (I think an older version may have done).
> I have heard the interpreter itself is quite small which helps of course
It helps a lot more than a lot: Small is everything.
"Main memory" is something like 1000x slower than "L1 CPU cache", so if your whole program lives in L1 you only pay to receive data, which streams in as fast as main memory can. How can you possibly go any faster than that?
The interpreter looks a lot like this [1]: scan a character, do something with it. There's no separate parsing phase, no AST, and not much analysis at all. Being simple helps keep it small. Writing densely makes it easy to see (without scrolling) similar code that can be refactored into less space. This is how small (dense) source code also helps make small object code. This is how small (dense) source code is fast!
[1]: nsl.com/papers/origins.htm
Once you've done all that, vector instructions and multi-threading can help eke out a little more speed. Recognising a couple of characters at a time and treating them specially can sometimes help as well, but it can also inflate object size quickly, so there needs to be some balance.
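The dispatch style described above can be sketched in miniature. The following Python toy is my own illustration, not K's actual implementation, and it uses a simple postfix mini-language rather than K's right-to-left grammar; the point is the "scan a character, do something with it" shape: no tokeniser, no AST, just one dispatch per character.

```python
def run(src):
    """Toy character-dispatch evaluator for a postfix mini-language:
    digits push themselves, '!' is enumerate (like K's !n), '+' adds,
    and 'S' sums a vector (a stand-in for K's +/)."""
    stack = []
    for ch in src:                # scan a character...
        if ch.isdigit():          # ...do something with it
            stack.append(int(ch))
        elif ch == '!':           # 5! -> [0, 1, 2, 3, 4]
            stack.append(list(range(stack.pop())))
        elif ch == '+':           # add the top two values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif ch == 'S':           # sum a vector
            stack.append(sum(stack.pop()))
    return stack.pop()

print(run("5!S"))   # sum of 0..4 -> 10
```

The semantics don't matter; the shape does. The whole evaluator is one loop and a handful of branches, which is why such an interpreter can stay tiny.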
You can "go faster" than memory bandwidth by accessing memory more intelligently (that is, by reducing the amount of memory traffic, possibly by improving cache behaviour). The classic example is loop tiling for matrix multiplication. Here you write significantly more code, but reduce accesses to main memory by a good constant factor. Sadly, with modern architectures, just keeping things small and simple is rarely the way to peak performance. My experience from doing high-performance programming (mostly on GPUs) is that you have a very generous budget when it comes to how large your code can be, but you have an extremely tight budget when it comes to how large your data is. (Of course, in absolute terms you access significantly more data than code, but you generally mostly worry about data size, not so much code size.)
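A minimal sketch of the loop tiling mentioned above (written in Python only to show the loop structure; the cache benefit requires a compiled language and realistic block sizes):

```python
def matmul_naive(A, B, n):
    """Straightforward triple loop: streams rows of B from memory n times."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, T=32):
    """Same arithmetic, restructured into T x T tiles so each tile of A
    and B is reused many times while it is still hot in cache."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, n, T):
            for jj in range(0, n, T):
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        a = A[i][k]   # scalar held across the inner loop
                        for j in range(jj, min(jj + T, n)):
                            C[i][j] += a * B[k][j]
    return C
```

More code than the naive version, fewer trips to main memory: exactly the trade-off described above.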
> which parts of K's implementation makes it so fast?
the missing bullshit parts :)
also simplicity and good memory access patterns
> vector instructions
if you write simple loops in c, compilers are good at generating those
> Does the K interpreter pattern-recognize
afaik some, like *| but fewer than dyalog
As a user (IANAP), I think avoiding I/O contributes significantly.
The design paradigm of k is to memory map the data and then "forget about the filesystem"; transform data in memory, instead of reading and writing "files".
The slowest part of working in k, for me, is the process of reading files into memory. And I run kernel and filesystem entirely in RAM (mfs and tmpfs); no disk is involved.
If there are other software authors who also adopted this paradigm of avoiding I/O, I would like to know of them.
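The memory-mapping paradigm described here can be sketched in a few lines of Python (a generic illustration of mmap, not k's actual I/O layer): the data file is mapped once, and thereafter the program indexes into it like an ordinary in-memory array, with the OS paging data in on demand.

```python
import mmap
import os
import struct
import tempfile

# Write a vector of 1000 little-endian 64-bit integers to a file.
path = os.path.join(tempfile.mkdtemp(), "vec.bin")
with open(path, "wb") as f:
    f.write(struct.pack("<1000q", *range(1000)))

# Map it and index directly: no read() calls, no buffering logic.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    (x,) = struct.unpack_from("<q", mm, 42 * 8)   # element 42
    print(x)   # 42
    mm.close()
```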
I probably used the wrong terminology here. What I meant really is avoiding (minimizing) disk I/O. (Others have already pointed out how k avoids I/O with main memory by staying resident in a CPU cache.)
Ideally I would prefer a system with sufficient RAM, without the need for "short-term" disk storage and thus without the need for virtual memory. Because today, I have the RAM I need. I have no requirement for short-term disk storage.
But like the rest of us, I use kernels that date back to a different era. These are kernels that assume insufficient RAM, the presence of a disk and the need for a "swap device" to do work. Disk I/O is a major part of that paradigm and has been embraced by software authors.
To me, disk is slow. Even SSD. I seek a different paradigm, not one still focused on secondary storage. k is the closest thing I have found.
I wish there was a way to use this at my company without the monstrous price tag. I also wish kdb+ came with much better dashboard support kinda like Tableau...I want to retrieve and mold my data quickly with kdb+'s query language and once it is in the proper form, play with several chart types.
The 32-bit edition is free
Can't be used commercially.
You can build a PoC/argument for why your organization should use it commercially
Very difficult with large IT departments to get something unusual like this, since you need hardware, plus a way to transfer data from your main database onto an SSD, to really take advantage of kdb+. Doing that would step on the toes of several departments and would require a large project to happen. Then you have the problem that many people aren't array-language savvy. If kOS had a better way to build real-time dashboards I would pay more attention. I really wish I could get a personal setup at work to do this (on the cheap), but classic IT really hates things outside their control.
> classic IT really hates things outside their control.
https://en.wikipedia.org/wiki/Shadow_IT
The author of that article has a very biased view, though. An organisation that finds a lot of shadow IT happening probably ought to replace the head of the IT department, since he or she is clearly not delivering what the users actually need.
From the link: "Shadow IT can act as a brake on the adoption of new technology"
LOL!!!
IT should exist to help the domain users, but after a point they get too big and start to dictate what gets used even if nobody wants it - like choosing the data-analytics technology for the data-analytics team when it's not what they want or need. So then your data-analytics team eventually goes outside the budget to get what they actually wanted, and you have two products. Very dysfunctional, and very common.
My IT department got suuuuuper confused just by the "db" in "kdb". It's a database! That means it must have roles and schemas and basically be a clone of Oracle! Here, fill out these 5,000 pages of paperwork!
You might be interested in hobbes, it's free and open source, and we use it with some very large/complex data sets:
https://github.com/Morgan-Stanley/hobbes
The PL is a little different (very similar to Haskell), interoperation with C/C++ is much more direct, and it's also suitable for low-latency applications (where the boxing overhead in K would be unsuitable).
Try Jd; it's almost as good and much cheaper.
http://code.jsoftware.com/wiki/Jd/Index
Is there a variant of this family of language that has readable syntax?
Yeah, it's actually the most readable in the family to those who haven't put the effort into learning the array paradigm. Dyalog APL and J are the two other main languages in the family with the largest user bases. Both are pretty awesome.
+1 to J. It is open source, has lots of libraries, documentation, examples, and runs on most mainstream desktop platforms.
Q is a (semi-)readable variant. You can more or less directly translate K symbols to English words. Here's some code for PE question 1 (sum the multiples of 3 or 5 below 1000):

K: +/&~&/(!1000)!/:3 5

Q: sum where not min (til 1000) mod/: 3 5
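For comparison, here's the same computation (Project Euler problem 1: the sum of the multiples of 3 or 5 below 1000) spelled out in Python:

```python
# Sum the natural numbers below 1000 that are divisible by 3 or 5 --
# the computation the K/Q code above expresses.
total = sum(i for i in range(1000) if i % 3 == 0 or i % 5 == 0)
print(total)   # 233168
```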
Q'Nial (https://github.com/danlm/QNial7), although it seems pretty much an abandoned language; searching for it on GitHub yields only about three repos. http://www.nial.com/ is the official website, which also seems abandoned.
Q'Nial was just open sourced a year or so ago; not really an abandoned language.
Readability is in the eye of the beholder! (and no, I’m not a K expert — but try working through a few problems and it does gradually start making sense)
If you want a slightly more “English like” language which still gives you the array stuff, Q is built on top of K4 and might fit the bill
http://code.kx.com/q4m3/
This syntax is very readable. The maximum semantic size of a function that's still easy to read is much larger: functions that do entire things in one go are more common, which makes reasoning about the entire program much easier. The game of life in one line is idiomatic and elegant APL, but it would be an inelegant mess in other languages.
> The maximum semantic size of a function that's still easy to read is much larger.
This seems like a really weird thing to optimise for - would you mind saying more? I've never found myself thinking "man, I wish my function could _do more_".
> This syntax is very readable.
Objectively, by any measure, this isn't true.
Not by the measure which matters most: the programmers using it.
Readability is relative. It's a category error to speak of it without saying who the reader is. Is Sanskrit readable?
> the programmers using it
Completely disagree, and I think the people spending the 2010's clearing up screeds of idiomatic perl 5 would side with me :)
Are these idiomatic perl 5 programmers, or programmers converting it to their language of choice? And if the second, would it have been any easier going from Lisp or Haskell to Python, Go or whatever?
Point being that translating from unfamiliar syntax to familiar is always a challenge.
I use k3 at work every day. It's perfectly readable to me.
Every summer we spend about a week training interns with K, and they proceed to read, modify and write real production code with it during their tenure. Interns come from a mix of backgrounds- computer science, various engineering degrees, etc. This seems compelling evidence that K is learnable.
If you find K unreadable, consider spending a week or two seriously working with it. You might be surprised how you feel afterwards.
> I use k3 at work every day. It's perfectly readable to me.
Right, but the one follows from the other.
I'm passingly familiar with K, Q, and a+. I've worked with Borror and I kind of own an a+ compiler. I think if you define "readable" to mean "you can learn it with two weeks of intensive tuition" then fine, yeah, it's readable. But to me, lightning-fast speed doesn't make up for terse-to-the-point-of-dumb syntax.
+/&~&/(!1000)!/:3 5
Reads like a cruel joke to me. It's unreadable and it breaks everything we know about writing good software, and to pretend otherwise kinda comes across as macho BS. Sorry. I don't know why developers in Whitney's programming languages all have this kind of snow-blindness to the clear failings of the language. I don't know why number of characters of code is the thing we're optimising for in 2018. It's not like you're firing bytes over a 56k modem.
Hi Sam, good to run into you here! :D
Regex is similarly terse, and many popular languages make use of it. Do you replace regex syntax with much more verbose function calls? Also, several posts above mention how the interpreter fits inside the CPU's L1 cache for maximum performance; I'm guessing the terse syntax helps keep it small enough.
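To make the regex comparison concrete, here's a terse pattern next to a hand-rolled equivalent in Python; the dense notation and the verbose scan compute the same thing:

```python
import re

line = "error=42 warn=7 info=3"

# Terse: one regex extracts every integer in the line.
terse = [int(m) for m in re.findall(r"\d+", line)]

# Verbose: the same extraction as an explicit character scan.
verbose, cur = [], ""
for ch in line + " ":
    if ch.isdigit():
        cur += ch
    elif cur:
        verbose.append(int(cur))
        cur = ""

print(terse)   # [42, 7, 3]
```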
You know what's faster than having an interpreter for your program that fits entirely in L1 cache? Not having the interpreter at all.
> Do you replace regex syntax with much more verbose function calls?
No, but I also don't find myself wishing the entire project read like regex does. "If only our entire suite of enterprise software was made up of regexes..."
It doesn't seem like there's much I could say to change your mind, but there's no "macho BS" behind my preferences. I wrote quite a bit of Forth before learning K, so maybe my sense of aesthetics had already been drawn a bit outside the mainstream?
Some time ago I built an environment for livecoding graphics experiments with K, and the concision of the syntax makes it delightfully easy to experiment and make changes as I build up programs piece by piece. Here's a recording I made which illustrates all the intermediate steps of a simple way to draw a Voronoi diagram:
https://i.imgur.com/YZkYyDs.gif
K notation is a powerful medium of communication when I'm whiteboarding ideas with coworkers, scribbling on a napkin, or tapping an idea into my phone on the subway. I don't write K because I think doing so paints me as some sort of Ubermensch; I write K because I find it crisp and beautiful. An elegant K solution to a thorny problem is poetically satisfying!
Wait, you find two weeks to be an unacceptably long time to learn an entire programming language?
every day
I use k/q on and off, and after a couple of weeks away you basically need to relearn it. The massive advantage of R and Python is that even after time away, they're still perfectly legible.
There's a maximum amount of stuff a single function/code unit can do before becoming unreadable. When writing code, there's a tension between the semantic size of the implementation, and the semantic size of the constituent parts of the problem: very few problems are broken down into clean chunks of code. Quite often, when trying to subjugate the problem to the language, you end up with lots of 'features' of the modular parts: lots of parameters, parameters being complex non-data objects, and so forth.
Breaking stuff down costs, as well - you have to make it clear how something is going to be used. You might write a helper function for one bit in particular, and then rely on additional levels of abstraction to make it clear how this function should be used.
Instead, if you can allow your functions to become semantically larger, you don't need to explain and protect your modularisation. Helper functions with one use are inlined (it's very common that these things have names longer than their definition). Instead of a large tower of abstraction, you just have two levels: the level of the data, and the level of the problem. Your entire program is written as functions, each of which does something in the problem domain, and for each, each definition is immediately clear about what it is doing with the data.
There was an article here a while ago called 'smaller code, better code', with accompanying comments, that explains this better than I can.
Thanks for taking the time, it's interesting to see another perspective.
It's worth the learning curve. I can't count the number of times some horrible multithousand-line pandas contortion has been replaced by a dozen lines of Q. After working in Q for a while, you go look at someone's R code or pandas code or whatever and you're like, "Shut up and get to the point!"
I still use k2 in addition to k4. Being able to see which things change and which stay the same between versions helps in the slow, thorough type of learning process. (I am not a fast learner. I am a thorough learner.) I would love to get a copy of k5 or greater but I am not a programmer by trade and doubt I would pass the audition.
Although I stopped using Windows, X11 and Mac a long time ago, I have tried k2 on Windows and it is extremely fast, irrespective of whether the computer is old and "underpowered". I used it to instantaneously generate plots and charts from the command line, instead of driving Excel with some scripting language. This is software from the late 1990s / early 2000s.
RosettaCode wiki needs to fix its k entries or turn off the silly Cloudflare email protection or whatever is causing the problem. The "@" symbol and subsequent characters are being replaced by "[email protected]".
https://rosettacode.org/wiki/Category:K
k4 can be integrated into daily use in UNIX-like OS for text processing.
Professional example: https://github.com/adavies42/qist/blob/master/lib/awq.k
Amateur example: (remove all duplicate lines)
Experiment: keep increasing the size of the file passed as $1 until the AWK version fails.
https://github.com/JohnEarnest/ok/blob/gh-pages/docs/Fromk5T...:
Padding Can Truncate (Done)
In k5, padding was treated as a minimum size for the output string:
In k6, as in older Ks, padding will produce a string with an exact length, truncating if necessary:
Is this correct?
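If I'm reading the k6 description right, the behaviour (pad to an exact length, truncating when the input is longer) corresponds to something like this Python sketch; `pad` is just an illustrative name here, not an actual k or ok function:

```python
def pad(s, n):
    """Pad to exactly n characters, truncating if necessary -- the k6
    behaviour described above. (k5 instead treated n as a minimum size.)"""
    return s.ljust(n)[:n]

print(repr(pad("abc", 5)))     # 'abc  '
print(repr(pad("abcdef", 5)))  # 'abcde'
```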