Aidapal Space

This is a Space for trying out the Aidapal model, which attempts to infer a function name, a comment/description, and suitable variable names when given the Hex-Rays decompiler output for a function. More information is available in this blog post.
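
For context, here is a minimal sketch of that workflow: loading the GGUF through transformers (which de-quantizes it, as noted in the first TODO item below) and prompting it with Hex-Rays pseudocode. The repo id, GGUF filename, and prompt wording are assumptions for illustration, not the Space's exact code.

```python
# Minimal sketch: load the GGUF via transformers (which de-quantizes it) and
# ask for a JSON analysis of some Hex-Rays pseudocode. The repo id, filename
# and prompt wording below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "AverageBusinessUser/aidapal"   # assumed Hub repo id
GGUF = "aidapal-8k.Q4_K_M.gguf"        # assumed GGUF filename

tokenizer = AutoTokenizer.from_pretrained(REPO, gguf_file=GGUF)
model = AutoModelForCausalLM.from_pretrained(REPO, gguf_file=GGUF, torch_dtype=torch.float16)

pseudocode = """int __fastcall sub_401000(const char *a1)
{
  return strlen(a1) + 1;
}"""

# Placeholder prompt asking for the fields the model is meant to produce.
prompt = (
    "Analyse the following Hex-Rays pseudocode and reply with JSON containing "
    "'function_name', 'comment' and 'variables':\n" + pseudocode
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```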

TODO / Issues

  • We currently use transformers, which de-quantizes the GGUF. This is easy but inefficient. Can we use llama.cpp or Ollama with ZeroGPU instead?
  • The model often prefixes its reply with a markdown JSON code-fence marker. Is this something I am doing wrong? At present we strip it before parsing the JSON; see the sketch after this list.
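
A sketch of that post-processing step (the helper name is illustrative, not the Space's actual code): strip an optional leading/trailing markdown code fence, then parse the JSON.

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Strip an optional markdown JSON code fence from the reply, then parse it."""
    cleaned = raw.strip()
    cleaned = re.sub(r"^```(?:json)?\s*", "", cleaned)   # leading fence marker
    cleaned = re.sub(r"\s*```$", "", cleaned)            # trailing fence marker
    return json.loads(cleaned)

print(parse_model_json('```json\n{"function_name": "copy_len"}\n```'))
# -> {'function_name': 'copy_len'}
```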
Examples
...