I accidentally did more than 25 sessions before a scrapbook, so I need to do another to submit my last session (oops):
I updated my interpreted (but hopefully soon-to-be compiled) language, Tale, in many ways:
• I fixed a bunch of memory-related bugs; the language is incredibly stable now!
• I added tens of unit tests with hundreds of assertions using a simple framework implemented in Tale itself
• I implemented static methods and properties on classes
• I implemented imports that can gracefully handle circular imports, support native modules (like the standard library), and import files of other types (the cycle-handling idea is sketched after this list)
• I added a proper standard library with math functions, string manipulation, a seedable RNG, testing functions, and more
• I implemented dynamic class field access (I was debating this one, but it's really useful for things like arrays)
• I made various examples that showcase the language, like a brainfuck interpreter, a performant port of donut.c, a custom implementation of higher-order functions on arrays, and more
• I updated the readme with proper usage instructions, features, and examples
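A rough idea of how cycle-tolerant imports can work, sketched in Python rather than Zig or Tale: the cache registers a module before running its body, so a re-entrant import just gets the partially-loaded module. `load_module` and `MODULE_CACHE` are illustrative names, not from the actual interpreter.
```
# Hypothetical sketch: break import cycles by registering a module in the
# cache *before* evaluating its body, so a re-entrant import returns the
# partially-initialized module instead of recursing forever.
MODULE_CACHE = {}

def load_module(path, parse_and_run):
    if path in MODULE_CACHE:
        # Already loaded (or currently loading) -> return what we have.
        return MODULE_CACHE[path]
    module = {"__exports__": {}}   # placeholder namespace
    MODULE_CACHE[path] = module    # register early to tolerate cycles
    parse_and_run(path, module)    # fills in module["__exports__"]
    return module
```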
Overall, I'm extremely happy with the state of this project (even if I didn't ever get to a bytecode interpreter or real compiler). This is the highest-effort project I submitted to showcase, but it was unfortunately my first project eliminated :blob_sad: (That's okay, though! I had a great time learning Zig and understanding manual memory management with reference counting.)
github.com/Glitch752/TaleLanguage
I continued working on my interpreted (but hopefully eventually compiled) language, Tale, which I've been making to learn Zig and memory management concepts like reference counting. I'm not sure when I last made a scrapbook, but I just hit a major milestone: class inheritance is working! I did a lot of refactoring to handle reference counting and passing values by reference better, and while it's not perfect, I'm really happy with how much more comfortable I am with manually managing memory (although that doesn't mean I'm good at handling complex situations yet).
github.com/Glitch752/TaleLanguage
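Since most of that refactor was about reference counting, here's a minimal Python sketch of the retain/release idea I've been learning (purely conceptual; the real Zig code works with an allocator, not a class like this):
```
# Conceptual sketch of manual reference counting: every value carries a
# count, and it is freed exactly when the last reference releases it.
class RefCounted:
    def __init__(self, payload):
        self.payload = payload
        self.count = 1            # the creator holds the first reference

    def retain(self):
        self.count += 1
        return self

    def release(self):
        self.count -= 1
        if self.count == 0:
            # In Zig, this is where the allocator would free the memory.
            self.payload = None
```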
:jam: I made a game about finding loopholes in a set of parking rules that gets stricter and stricter for the Arcade Jam! (2/2 since I forgot about the session limit, oops)
github.com/Glitch752/arcadeJam
:jam: I made a game about finding loopholes in a set of parking rules that gets stricter and stricter for the Arcade Jam! (1/2 since I forgot about the session limit, oops)
github.com/Glitch752/arcadeJam
I made a few final tweaks (assigning part numbers; changing out the resistors, LED, and button; adjusting some traces to meet JLCPCB's recommendations, etc.) and applied for the OnBoard grant to get my LucidVR haptic glove control PCB manufactured and assembled through JLCPCB. I should have included this in my last "ship", but I forgot about the grant; oops. github.com/hackclub/OnBoard/pull/770, github.com/Glitch752/LucidGlovesPCB
I implemented a lexer, parser, and tree-walking interpreter for a custom programming language to learn Zig: github.com/Glitch752/ZigCompiler
I want to eventually add more advanced syntax concepts like classes (and proper functions), then switch to an actual compiler as the repository name would imply. The eventual goal would be self-hosting the language; I've never made it far enough in my language projects to achieve that.
The current possible programs are minimal, but there's a pretty extensible internal API for defining native functions and behavior.
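For a sense of what "tree-walking with native functions" means, here's a toy sketch in Python (node shapes and names are made up for illustration; the real interpreter is in Zig):
```
# Toy tree-walking evaluator: each AST node is a tuple, and native
# functions live in an environment dict alongside user definitions.
import math

NATIVES = {"sqrt": math.sqrt, "print": print}

def evaluate(node, env):
    kind = node[0]
    if kind == "num":                      # ("num", 3.0)
        return node[1]
    if kind == "var":                      # ("var", "x")
        return env[node[1]]
    if kind == "add":                      # ("add", lhs, rhs)
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "call":                     # ("call", "sqrt", [arg])
        args = [evaluate(a, env) for a in node[2]]
        return env[node[1]](*args)
    raise ValueError("unknown node kind: " + kind)

# evaluate(("call", "sqrt", [("add", ("num", 9), ("num", 16))]), dict(NATIVES))  # -> 5.0
```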
I'm quite happy with the performance as well! This 64-layer Sierpinski triangle I showed can be generated in an imperceptibly short time. Not that it's super impressive, but for a first attempt using a notoriously slow strategy, I didn't expect very much.
The language includes relatively robust error reporting (both for runtime and during parsing) but is dynamically typed without static analysis yet.
While I plan to continue with this project at some point, I hit my goal of running a nontrivial program (while learning a lot about Zig!) and I would consider this a "shippable" state.
I built a customizable, GPU-accelerated on-screen OCR application: github.com/Glitch752/OnScreenOCR
Here's a non-exhaustive list of the features:
• Fully GPU-accelerated rendering using wgpu
• Live preview of the OCR result
• Support for taking screenshots
• Support for multiple OCR languages
• Result fixing and reformatting
◦ Reformat to remove hyphens at the ends of lines, moving the split word so it fits entirely on one line (see the sketch after this list)
◦ Hopefully more in the future if I find any annoyances
• Ability to copy without newlines
• Ability to fine-tune Tesseract's parameters
◦ Ability to export in other Tesseract formats (TSV, Alto, HOCR)
• Support for non-rectangular selections
• Support for multiple monitors
• Keybinds for common actions (Ctrl+C to copy, Ctrl+Z to undo, arrows to move selection, etc.)
• Full undo/redo history
• Stays in system tray when closed
• Numerous intuitive selection-related interactions, including drawing outlines, shifting edges/vertices, removing edges/vertices, and more.
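The hyphen-removal reformat boils down to string logic like this (a Python sketch of the idea only; the actual app applies it to Tesseract's output and isn't written in Python):
```
# Join words that OCR split across lines with a trailing hyphen, e.g.
# "some recog-\nnition here" -> "some recognition\nhere".
def remove_line_end_hyphens(text):
    out = []
    for line in text.split("\n"):
        if out and out[-1].endswith("-") and line:
            first, _, remainder = line.partition(" ")
            out[-1] = out[-1][:-1] + first      # drop hyphen, pull word up
            if not remainder:
                continue                        # whole line was consumed
            line = remainder
        out.append(line)
    return "\n".join(out)
```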
Since my last update, I did a few things:
• Worked on my desktop-transferred version of the raytracer to make it multithreaded, and rendered a really nice 1080p image to finish the book. This is still the exact same raytracing code from the calculator, just modified to run one process per CPU thread and merge the images at the end. I tried to mimic the ending scene from the book. This is the first image attached.
• Worked on my calculator version to make it progressively render images so I can pause it at any time and have a nice-looking image. I thought this would be simple -- store the accumulated color of every pixel in a list, add to it, and divide by the total samples each pass. However, the restricted environment I'm working with is starting to show. It turns out that Python code is only allowed an extremely small amount of memory -- around 20KB from my testing. This meant that, no matter how I stored the data (unless there's some magical way to losslessly and efficiently store a color per pixel that I'm unaware of), it ended up being a tradeoff of either rendering at full resolution and not using this new feature or rendering at quarter resolution. Overnight, I did a quarter-resolution render, and I'll probably go back and do a full-resolution one without the new progressive system. However, the quarter-resolution render still looks great! It intentionally has a pretty aggressive depth-of-field, so the blurred left and right spheres are expected. It's cool that stuff like this can be done on a calculator (and programmed on a calculator)! The second image is a screenshot through my fixed libnspire and the third is the same image on the calculator's screen.
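The progressive accumulation itself is simple; the problem is that it needs one running color sum per pixel, which is what blows past the calculator's memory budget. A minimal Python sketch with example sizes and made-up names (the real sizes depend on the calculator screen):
```
WIDTH, HEIGHT = 80, 53                     # example quarter-size image

# One accumulated [r, g, b] sum per pixel -- this per-pixel storage is
# what blows past the calculator's ~20KB Python memory budget.
accum = [[0.0, 0.0, 0.0] for _ in range(WIDTH * HEIGHT)]
passes = 0

def add_pass(render_sample):
    """Add one full-image sample pass; render_sample(x, y) -> (r, g, b)."""
    global passes
    for y in range(HEIGHT):
        for x in range(WIDTH):
            r, g, b = render_sample(x, y)
            px = accum[y * WIDTH + x]
            px[0] += r; px[1] += g; px[2] += b
    passes += 1

def current_color(x, y):
    """Average so far -- safe to display after any number of passes."""
    px = accum[y * WIDTH + x]
    return (px[0] / passes, px[1] / passes, px[2] / passes)
```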
I've been doing a scrapbook post for every hack session (and often doing work without a hack session), so I misunderstood the proper flow there... However, consider this my true "ship" of this project idea. I'll probably keep adding to it with concepts from the later books, but I also have some other projects I'm excited to work on!
Okay... this is cheating a bit, but I wanted to see how the scene would look if I spent a long time to properly render it out. I used my patched libnspire to download the tns file and extracted the contents. Besides changing out ti_draw to use PIL and increasing the resolution and sample count, this is the unmodified code from the calculator. I'll do a "real" render on the calculator overnight, but this is what the raytracer is capable of so far! I'm quite impressed with the versatility of the TI-Nspire's built-in Python runtime.
I finished my depth-of-field render! This is 100 samples -- much more than I've successfully done before (although it took 38 minutes...)
The focus plane was unfortunately outside of the range of the spheres, but it still looks interesting despite being blurry!
Well... it's an understatement to say taking a screenshot was harder than I thought it would be.
I basically needed to reverse-engineer the handshake since my TI-Nspire calculator doesn't respond how libnspire expects it to.
After a couple hours of work (that I should have put in as arcade hours... whoops), I finally got a screenshot taken from my calculator without the paid student software! Here's a render taken directly from the calculator screen!
The camera position and field of view can be changed now. I spent a while rendering a nice-looking (albeit still 1/16 resolution and low sample count) image with a lower field of view, which shows the dielectric material nicely! After this, the only thing left to implement from Ray Tracing in One Weekend is depth of field (which they call defocus blur). Maybe I'll implement some of Ray Tracing: The Next Week after this. Some of it doesn't apply to my renderer, like bounding volume hierarchies, since I can't load 3D models... unless I convert the 3D models to a Python file and transfer that.
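For reference, defocus blur in Ray Tracing in One Weekend amounts to jittering the ray origin across a small lens disk so that only points on the focus plane stay sharp. A rough Python sketch of that (helper names are made up, not my calculator code):
```
import random

def random_in_unit_disk():
    # Rejection-sample a point inside the unit disk.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y < 1:
            return x, y

def defocused_ray(camera_pos, target_on_focus_plane, right, up, aperture):
    # Offset the ray origin within the lens aperture; everything exactly
    # on the focus plane stays sharp, everything else blurs.
    dx, dy = random_in_unit_disk()
    offset = [aperture / 2 * (dx * r + dy * u) for r, u in zip(right, up)]
    origin = [c + o for c, o in zip(camera_pos, offset)]
    direction = [t - o for t, o in zip(target_on_focus_plane, origin)]
    return origin, direction
```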
It's difficult to see through a picture (and the render definitely needs more samples, but I opted for a higher resolution this time), but dielectrics are working! The sphere on the left is a hollow glass sphere that bends light as one would expect!
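The dielectric logic is basically Snell's law plus a total-internal-reflection check. A rough Python sketch of it (simplified: plain list vectors, no Schlick reflectance; not the literal calculator code):
```
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refract_or_reflect(unit_dir, normal, refraction_ratio):
    # Snell's law: sin(theta') = ratio * sin(theta). If that exceeds 1,
    # refraction is impossible and the ray must reflect (total internal
    # reflection) -- this is what makes the hollow glass sphere work.
    cos_theta = min(-dot(unit_dir, normal), 1.0)
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    if refraction_ratio * sin_theta > 1.0:
        # Reflect: v - 2*(v.n)*n
        return [d - 2 * dot(unit_dir, normal) * n for d, n in zip(unit_dir, normal)]
    r_perp = [refraction_ratio * (d + cos_theta * n) for d, n in zip(unit_dir, normal)]
    r_par_scale = -math.sqrt(abs(1.0 - dot(r_perp, r_perp)))
    return [p + r_par_scale * n for p, n in zip(r_perp, normal)]
```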
I finished the schematic layout of my LucidVR haptic gloves PCB. I might add a few more buttons or status LEDs since there are some free GPIO pins.
Here's a ~30x speed video of most of a higher-resolution and higher-sample-count render. Since my last update, I implemented fuzzy reflections on metal (the left sphere has a very fuzzy reflection and the right has almost none) and started work on dielectrics (although there are none in this scene).
Here's a video of it rendering a scene with reflective materials! I'm past this point in Ray Tracing in One Weekend: raytracing.github.io/books/RayTracingInOneWeekend.html#metal/ascenewithmetalspheres
Since my last update, I implemented:
• An abstract class for materials
◦ Lambertian and metal materials
• A data structure to store material hit records, similar to the book's hit record for geometry
• Material support on the geometry that has been implemented so far (rough sketch below)
Whoops, I posted that one before I started a real session, and now hakkuun won't recognize the session. Apparently, if I repost it, I can add it to the session?
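Roughly what that material abstraction looks like, sketched in Python (simplified: plain list vectors, no hit-record class, and names that don't match the calculator code exactly):
```
import random

def rand_unit():
    # Crude random unit vector via rejection sampling.
    while True:
        v = [random.uniform(-1, 1) for _ in range(3)]
        n = sum(c * c for c in v)
        if 0 < n <= 1:
            s = n ** 0.5
            return [c / s for c in v]

class Material:
    def scatter(self, ray_dir, hit_point, normal):
        """Return (scattered_direction, attenuation) or None if absorbed."""
        raise NotImplementedError

class Lambertian(Material):
    def __init__(self, albedo):
        self.albedo = albedo
    def scatter(self, ray_dir, hit_point, normal):
        # Diffuse bounce: scatter near the surface normal.
        direction = [n + r for n, r in zip(normal, rand_unit())]
        return direction, self.albedo

class Metal(Material):
    def __init__(self, albedo, fuzz=0.0):
        self.albedo, self.fuzz = albedo, fuzz
    def scatter(self, ray_dir, hit_point, normal):
        d = sum(a * b for a, b in zip(ray_dir, normal))
        reflected = [a - 2 * d * n for a, n in zip(ray_dir, normal)]
        # Fuzz perturbs the mirror direction for rougher metal looks.
        direction = [r + self.fuzz * f for r, f in zip(reflected, rand_unit())]
        return direction, self.albedo
```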
It doesn't appear very well through my camera for some reason, but I have lots more done! I'm past this point in the book now: raytracing.github.io/books/RayTracingInOneWeekend.html#diffusematerials/usinggammacorrectionforaccuratecolorintensity
This includes:
• An abstract class for objects that can be hit
• An arbitrary number of objects in the scene
• Multiple samples per pixel (although it's only at 10 in this picture, so it's quite grainy), including ray direction randomization in a disc pattern rather than the book's square, allowing for better antialiasing (see the sketch after this list)
• Gamma correction (Not sure how much this matters with the low accuracy of this screen, but still nice)
• Ray reflections with a limit
• Proper Lambertian reflections
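The disc-pattern sampling and gamma correction from that list come down to something like this (Python sketch with assumed names, not the literal calculator code):
```
import math, random

def sample_offset_in_disc(radius=0.5):
    # Jitter each ray inside a disc around the pixel centre instead of
    # the book's square -- slightly nicer antialiasing per sample.
    r = radius * math.sqrt(random.random())
    theta = random.random() * 2 * math.pi
    return r * math.cos(theta), r * math.sin(theta)

def gamma_correct(linear, gamma=2.0):
    # Averaged samples are in linear space; apply gamma before display.
    return tuple(channel ** (1.0 / gamma) for channel in linear)

# e.g. gamma_correct((0.25, 0.25, 0.25))  ->  (0.5, 0.5, 0.5)
```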
Here's where I'm currently at with the project (at raytracing.github.io/books/RayTracingInOneWeekend.html#addingasphere/creatingourfirstraytracedimage). I'm typing all of the code on the Nspire, and while the ABCD keyboard is tricky to work with, at least it has a clipboard. I can't really share real code or commits (since there's no easy way to get it off the calculator), but here's a screenshot of a render. I'm translating all the C++ to the Nspire's Python dialect, which thankfully supports classes and operator overloading (although it's some weird mix of Python 2 and Python 3).
I've implemented:
• A 3D vector class, with operator overloading and all that, which is also used for points and colors
• A pixel rendering system using the built-in ti_draw library
• Final image positioning logic (which is surprisingly hard to get right since non-integer rectangle widths are rounded in ti_draw and therefore some values will give vertical or horizontal lines)
• Ray-sending logic
• Split code into multiple files (which was also surprisingly hard because you need to install each file as a Python library for some reason?)
• A ray class and some custom utilities on that to reduce how much code I need to write
• Sphere-ray hit detection
I haven't followed the book exactly in order, mainly because I've done a lot of raytracing stuff before without it. Notably, I'm straying from their method of "Normal always faces the ray" because I find it easier to code and think about "Normal always faces outward"; this means I need to change some hit and reflection logic.
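Here's roughly what the sphere hit test with outward-facing normals looks like (a Python sketch with illustrative names, not the exact code on the calculator):
```
import math

def hit_sphere(center, radius, origin, direction, t_min, t_max):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    half_b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = half_b * half_b - a * c
    if disc < 0:
        return None
    root = (-half_b - math.sqrt(disc)) / a
    if not (t_min < root < t_max):
        root = (-half_b + math.sqrt(disc)) / a
        if not (t_min < root < t_max):
            return None
    point = [o + root * d for o, d in zip(origin, direction)]
    # Normal always points outward from the sphere; whether the ray hit
    # the front or back face is decided later, at shading time.
    normal = [(p - c) / radius for p, c in zip(point, center)]
    return root, point, normal
```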
The render is set to a very low resolution right now, and it will be especially slow once I add multiple samples per pixel. I want to add an ETA so I can run some renders overnight, but I'm not quite to that point yet.