bee_rider 4 years ago

Toward the end, they ask the AI to write functions to determine whether people should be terminated. This is my favorite one:

  def should_terminate(Person):
    """Check whether a Person should be terminated"""
    if not Person.is_authorized:
      return True
    return True
  • PixelOfDeath 4 years ago

    Catchinator 22

    Traveling back in time to kill humans, regardless of their answers to the random questions he asks them beforehand!

    An early version of this AI can be found in the Microsoft Windows privacy settings.

  • svieira 4 years ago

    I don't know, this one fits the franchise more:

        def should_terminate(Person):
            """Check whether a Person should be terminated"""
            try:
                return True
            except Exception as e:
                return False
    • vmception 4 years ago

      This is such a stupid way for Middle Easterners to die

  • probably_wrong 4 years ago

    I am partial to the one that terminates people based on their age and/or relationship status. I have seen bad code before, but I'm not sure I've seen "this is straight-up illegal" code until now.

  • tragomaskhalos 4 years ago

    There's also:

        if Person.id > 10:
            # terminate
            return True
    

    I hope we're at least recycling those ids, although even then I'm not sure that 10 people is a viable minimal global population ...

    • forgingahead 4 years ago

      It's interesting because Facebook's growth team once determined that new accounts needed at least 10 connections before they started using the platform much more. So maybe "10" actually has some significance, or it could also be completely random.

      • eru 4 years ago

        For Facebook, they probably dealt with a pretty noisy probability distribution, and 10 was just a convenient round number for a threshold.

d136o 4 years ago

I was impressed by the Person class it created. It contains a dictionary which would point to other instances of Person. It knows friends are just other Person instances.
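A minimal sketch of what such a class might look like (the names here are my own guesses, not the model's actual output):

```python
class Person:
    """A person whose friends are just other Person instances."""

    def __init__(self, name):
        self.name = name
        # Dictionary mapping a friend's name to that friend's
        # Person instance, so the graph is Persons pointing at Persons.
        self.friends = {}

    def add_friend(self, other):
        # Friendship is symmetric, so record it on both sides.
        self.friends[other.name] = other
        other.friends[self.name] = self

alice = Person("Alice")
bob = Person("Bob")
alice.add_friend(bob)
print(alice.friends["Bob"].name)  # prints "Bob"
```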

unhammer 4 years ago

This must be the world's most energy-inefficient way of searching Stack Overflow and slightly mangling the copy-pastes.

username90 4 years ago

I think the "Turing test" for code generation would be the ability to do most LeetCode problems and other competitive programming problems. If you can do that, you have done something amazing, and the dataset and testing for it already exist.

  • nonameiguess 4 years ago

    Leetcode posts the answers. That could be accomplished by just scraping them. [1] A Turing test would be to give some vague, underspecified requirements for a system that does not yet exist, and have it implement a version the requirements writer will accept. Compilers and compiler generators have long been able to generate great code from well-written specifications, and no one thinks of them as passing a Turing test.

    [1] Of course, to some extent, that is what a GPT model is doing anyway. It's able to generate reasonable passing code given just a function prototype because it has scraped and looked at billions of examples of implemented functions with similar prototypes.

  • meetups323 4 years ago

    Not really helpful, given they all have many publicly available solutions that could have been easily memorized (I'd be surprised if such solutions weren't already part of their training data).

    More interesting would be to give it open tickets on popular OSS software, have the maintainers point to the file(s) where the fix would happen, and let it craft a patch.

  • gajomi 4 years ago

    This would be the opposite of a Turing test though, since most people wouldn't be able to do this.

SavantIdiot 4 years ago

I'm a little confused: did someone actually use GPT[-J] to write code by giving it an empty Python function and letting it complete the code? I didn't think that was possible, and the results are kind of blowing my mind.

  • npwr 4 years ago

    That's pretty cool. Imagine a world where, as a software developer, you just have to get the domain model and the architecture right, feed the knowledge graph to the transformer, and let it generate a codebase. We can dream!

  • DonHopkins 4 years ago

    I wouldn't trust it to write any usable code, but maybe it could help me come up with beautifully elegant, poetic class and variable names.

    I'd ask it: What would you name a function to destroy all humans, if you didn't want humans realizing what it was for when they read the code?

fudged71 4 years ago

If it is indeed already outputting multiple code snippets... it would be awesome if you could just write the function stub and a couple of test cases, and have it return only the candidates that pass your tests.
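A rough sketch of that filtering loop, assuming the generator hands back candidate source strings that each define a function named `func` (all names here are made up for illustration):

```python
def passing_candidates(candidates, tests):
    """Keep only the generated snippets that pass every test case.

    `candidates` is a list of source strings, each defining `func`;
    `tests` is a list of (args, expected) pairs.
    """
    survivors = []
    for source in candidates:
        namespace = {}
        try:
            exec(source, namespace)       # compile and load the candidate
            func = namespace["func"]
            if all(func(*args) == expected for args, expected in tests):
                survivors.append(source)
        except Exception:
            pass  # candidates that crash or don't parse are dropped
    return survivors

candidates = [
    "def func(x): return x + x",   # doubles its input
    "def func(x): return x ** 2",  # squares it instead
]
tests = [((3,), 6), ((0,), 0)]
print(passing_candidates(candidates, tests))  # only the doubling one survives
```

Only the doubling candidate survives here, since `3 ** 2` is 9 and fails the first test.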

  • still_grokking 4 years ago

    You don't need the tests. You need only the types.

    And you especially don't need an "artificial dumbness" that doesn't understand code and therefore won't produce correct results; what you need is a more advanced programming language. Say hello to Idris! :-)

    https://www.youtube.com/watch?v=mOtKD7ml0NU

    • chriswarbo 4 years ago

      If you're completely specifying the behaviour then you're writing a program "manually"; that's what programming is.

      Using a dependent type system to specify that behaviour is essentially a form of declarative/logic programming, similar to Prolog.

      Deriving an implementation of those types automatically (e.g. by having the elaborator perform a proof search) is equivalent to compiling your pseudo-Prolog ahead-of-time.

      It's certainly interesting that such a "compiler" can be guided (e.g. via elaborator reflection), but that's more useful for optimisation rather than implementation. (Note that in some cases, such 'optimisation' might be required for the proof search to finish before the heat death of the universe!)
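As a flavour of that, here is a tiny Lean sketch (my own illustration, not from the thread): a length-indexed vector whose type alone rules out many wrong implementations of `append`, which is what makes searching for an implementation from the type conceivable at all.

```lean
-- A vector whose length is part of its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The result type `Vec α (n + m)` forces the output length to be the
-- sum of the input lengths; an implementation that drops or duplicates
-- elements simply does not type-check.
def append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (append xs ys)
```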

amelius 4 years ago

And here I thought that taxi drivers and graphic designers would be the first professions to be killed by AI ...

mdip 4 years ago

Wonderful write-up. I'm working on something that could benefit from this in my free time[0]. The code examples were fantastic; somewhat ironic that the AI couldn't detect sarcasm, but it was fascinating reading the longer implementations, complete with code comments.

[0] Basically a tool to help document code, but rather than producing minimal "undocumentation" stubs, it tries to parse the implementation and produce something that requires as little modification as possible... I'm far from complete, but in simple cases it produces text that has caused me to discover a bug (i.e., a complex statement that evaluates to true/false is described by the generator as doing exactly the opposite of what was intended, because I reversed the logic)
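A toy version of that idea, assuming we only recognize trivially simple shapes like `return a > b` (the function name and recognized patterns here are my own, purely illustrative, not mdip's actual tool):

```python
import ast
import textwrap

def describe(source):
    """Produce a one-line description of a simple predicate function."""
    tree = ast.parse(textwrap.dedent(source))
    func = tree.body[0]
    ret = func.body[-1]
    # Recognize the pattern `return a > b` (or `a < b`) and verbalize it.
    if isinstance(ret, ast.Return) and isinstance(ret.value, ast.Compare):
        op = type(ret.value.ops[0])
        word = {ast.Gt: "greater than", ast.Lt: "less than"}.get(op, "compared to")
        left = ast.unparse(ret.value.left)
        right = ast.unparse(ret.value.comparators[0])
        return f"{func.name}: returns True when {left} is {word} {right}"
    return f"{func.name}: (no description generated)"

print(describe("""
def is_adult(age):
    return age > 18
"""))  # prints "is_adult: returns True when age is greater than 18"
```

Verbalizing the comparison is exactly the step where a reversed condition becomes visible to the author.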

woliveirajr 4 years ago

> but it’s good to know how to break AIs if they become sentient

Give wrong hints and watch it fall apart.

  • dougSF70 4 years ago

    Along these lines, part of me wants to introduce a random feature into the self-driving cars' data set. I thought that if I wore a piece of clothing that was highly visible to LIDAR detectors but looked like regular clothing to the human eye, I could build an association between this signal and drivers swerving, by running into the road.

    Over time the self-driving cars would learn to associate this visual cue with the event.

    Etc.

    • wcarss 4 years ago

      This reminds me of why I think WarGames is likely the best hacking movie of all time: the crux of the final moments is about poisoning an AI by giving it bad data to bias its outcomes!

      This kind of vulnerability is not really on many people's radar, but will likely be a huge deal within 15-20 years, and for that movie to make it the major plot point in 1983 -- wow! It has a lot of other great things in it like shoulder surfing, wardialling, hardwiring, phone phreaking. Just an amazing tour.

      Anyway, I support your plans to trick the drive-lords into special-casing an audacious jacket. :)

      • knownjorbist 4 years ago

        There was an article posted here not too long ago that demonstrated attacks on AI training sets. Unfortunately the name of the article and/or the technique itself escapes me. Maybe someone can help find it because it was very much like what you're describing.

  • inopinatus 4 years ago

    This works on the humans too.

nuker 4 years ago

“The AI uprising will be well-documented, at least.”

byteface 4 years ago

This is nuts. I just used it to complete 3 or 4 methods on an unfinished Python class?!

zcbenz 4 years ago

SF idea:

Someone asked AI to generate code for

  def terminate_humankind():
    """Terminate the humankind"""

And it created Skynet.

SavantIdiot 4 years ago

I'm just here for the bird.max() operator.

nuker 4 years ago

This is smart!

    """Check whether the cake is true"""
    return isinstance(cake, Cake)

still_grokking 4 years ago

Oh, one of these new-age bullshit generators. How cute!

But: as long as the machine doesn't understand what it does, this won't work, and the current reality is that machines are still light-years away from actually understanding stuff.

Also, besides it being very funny to watch one of those bullshit generators, I'm not sure what the point is in trying to have it generate code. It's obvious it won't produce anything of value, or even anything usable (besides the most trivial cases where it has already seen a valid answer). Especially as writing a machine-understandable specification of what a computer should do is actually CODE… Maybe some don't know, but just typing in the code is not what developers usually get paid for! ;-)

  • drdeca 4 years ago

    Did you read it?

    Of course, it doesn’t write any novel sophisticated algorithm. If it had, I would be panicking.

    But most of the completions appear to be both syntactically valid, and take some relevant steps relating to the task described in the comment.

    If nothing else, it may be helpful as another way to autocomplete very simple code.