An introduction to Nash Equilibrium and Counterfactual Regret Minimization (CFR) with Kuhn Poker
A Jupyter notebook comparing TensorBoard analytics with Altair visualizations
This is a list of practices I adopted along the way that I think are worth sharing.
I coded this Transformer from scratch for learning. This is based on The Annotated Transformer by Harvard NLP, which uses PyTorch.
It is tested on a toy problem instead of NLP data to make things simpler.
This library helps you organize machine learning experiments. It maintains TensorBoard summaries and checkpoints, produces pretty console output, and adds a header with experiment progress to Python source files. It also has tools to plot custom charts based on TensorBoard summaries.
I coded a DQN agent to play Atari. It is a standalone implementation. I went through the OpenAI DQN, so it's very similar to it. I coded this in a literate fashion too, similar to my previous PPO implementation.
I implemented a reinforcement learning agent using PPO to play Atari Breakout. It is a standalone implementation, originally written in TensorFlow and later rewritten in PyTorch.
I coded it in a literate form, with all the mathematical formulas etc. embedded in comments, so that it can be used as a tutorial if anyone wants to - and of course as a reference for myself.
I started working on a bunch of helper classes to use TensorFlow on Jupyter notebooks.
They output nice diagrams and mathematical formulas based on the TensorFlow operations. This helps you understand the code better, and also acts as inline help when coding.
This is a very simple Generative Adversarial Network built with deeplearn.js on ObservableHQ.
I coded this as an experiment to try out ObservableHQ, because I've loved almost all the projects by its creators @jashkenas and @mbostock.
I implemented a Long Short-Term Memory (LSTM) module in NumPy from scratch. This is for learning purposes. The network is trained with stochastic gradient descent with a batch size of 1, using the AdaGrad algorithm (with momentum).
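For reference, one common way to write an AdaGrad update with a momentum term, where g_t is the gradient at step t; the exact variant in the code may differ slightly:

    % AdaGrad with momentum (one common formulation, not necessarily
    % the exact variant in the code):
    G_t = G_{t-1} + g_t^2                                   % accumulated squared gradients
    v_t = \mu v_{t-1} + \frac{\eta}{\sqrt{G_t} + \epsilon} g_t   % velocity, momentum \mu, learning rate \eta
    \theta_t = \theta_{t-1} - v_t                           % parameter update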
We are considering moving our codebase from CoffeeScript to TypeScript.
I updated the Wallapatta editor with a spellchecker.
We did a rewrite of the nearby.lk data model library, and we decided to open-source the core of it. It supports JSON or YAML data files, and parses them based on a specification (like a schema).
In Wallapatta we model pagination as a cost minimization problem. That is, we try to find where to place page breaks so that there is no overflow and the cost is minimised. If the cost of adding a page break at a given point is known, this can be solved easily with dynamic programming.
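A minimal sketch of that dynamic program in C, with made-up item heights and a made-up cost function (squared leftover space on a page, infinite on overflow); this is just an illustration, not Wallapatta's actual cost model:

    #include <float.h>
    #include <stdio.h>

    /* Toy inputs: N items with fixed heights, pages of height PAGE. */
    #define N 8
    #define PAGE 10.0
    static const double height[N] = {3, 4, 2, 5, 1, 6, 2, 3};

    /* Cost of putting items i..j-1 on one page: squared leftover
     * space, or DBL_MAX if the items overflow the page. */
    static double cost(int i, int j) {
        double h = 0;
        for (int k = i; k < j; k++) h += height[k];
        if (h > PAGE) return DBL_MAX;
        return (PAGE - h) * (PAGE - h);
    }

    int main(void) {
        /* best[j] = minimum cost of laying out items 0..j-1, i.e.
         * best[j] = min over i < j of best[i] + cost(i, j). */
        double best[N + 1];
        int brk[N + 1] = {0};
        best[0] = 0;
        for (int j = 1; j <= N; j++) {
            best[j] = DBL_MAX;
            for (int i = 0; i < j; i++) {
                double c = cost(i, j);
                if (best[i] == DBL_MAX || c == DBL_MAX) continue;
                if (best[i] + c < best[j]) {
                    best[j] = best[i] + c;  /* best layout breaks the page after item i-1 */
                    brk[j] = i;
                }
            }
        }
        printf("total cost: %g\n", best[N]);
        for (int j = N; j > 0; j = brk[j])  /* walks the breaks back, last page first */
            printf("page: items %d..%d\n", brk[j], j - 1);
        return 0;
    }

The double loop makes this O(n^2); in practice you can bound how many items fit on one page, which keeps the inner loop short.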
We want to share some node.js performance tips we found while working on Forestpin; specifically, optimizing runInContext, setting heap limits, memory usage with string slices, and joining strings.
I added full-width blocks and scripting to Wallapatta. Full-width blocks are useful for adding large images, and we use embedded scripts to create diagrams.
We found that Chrome (and therefore node.js) continues to keep the parent string in memory even if you discard it after taking a slice of it.
This is probably an optimization to make slicing efficient and to reduce the memory footprint - assuming you are keeping a reference to the parent anyway.
I came across jsblocks yesterday on Hacker News. They had a nice set of performance test cases comparing jsblocks with Angular and React. It made me curious to see how the small library we use (Weya.coffee) compares in performance with these.
This is a new project I'm working on. It helps parse data with different structures, like reports from legacy systems.
I just started moving my blog from Svbtle to a static blog generator that I created. It is based on Wallapatta.
This is a tutorial on using shared memory among node.js processes.
It took Forestpin almost two years to get its first big customer. Making the first sale is always hard. It was a tough ride: talking to a lot of potential customers, changing product strategy from time to time, taking classes on pitching, and of course a lot of programming. We learned a lot during the process.
Which background is better? Most text editors and document readers have light backgrounds. But a lot of programmers use dark backgrounds. And there are some analytics and dashboard applications with dark backgrounds.
SEO is dead - or at least it is much different from what it was known to be. But there are plenty of consultants who market SEO as if it were something that is hard to get right. Many organizations fall for it.
We figured out that we can get a much higher frame rate for animations by using WebKit matrix3d transformations instead of standard CSS properties.
Weya.coffee is a lightweight, dependency-free library for generating DOM elements. We developed it to replace Coffeecup as a client-side template engine. Because of its simplicity and performance, we are also using Weya to replace the DOM manipulation of d3.js in data visualizations.
We have been using a lot of tools and libraries in our software, and have replaced a number of them with our own code. Libraries make it easy to get things done and to ship early. But in my experience, having a third-party library or tool dominate a core part of your software is not a good idea.
nearby.lk stopped advertising on Facebook to get Facebook likes, because we felt that it was a giant fruitless scheme of making Facebook rich. Most of the likes are useless; they are basically random clicks, which add no value to anybody, and you have to pay Facebook for them. By the way, this may not be the case with advertising for Clicks to Website, Website Conversions, etc. - I don't have experience with those.
We developed a small library called fp.js as a wrapper for d3.js DOM-creation code. It helps you write much more readable, cleaner code.
Underlines don't take up extra space in a table or a list of data, and by varying the length of the underline you can help readers scan much faster and get an idea of the data and its distribution without having to read each number.
I started working on Sweet.js about a month ago. It is inspired by Backbone.js. Sweet.js supports HTML5 states, so that you don't have to go through workarounds like these. Sweet.js is not an MVC framework, but it has views similar to Backbone.js's, which support inheritance without affecting the events and initializations of superclasses. And it's written in CoffeeScript.
nearby.lk moved its servers from Google App Engine to Amazon EC2 a couple of months back, and the backend is now built with node.js, with MongoDB as the database.
We are trying to have C-like macros in CoffeeScript. The main motivation is to improve efficiency while keeping the code clean and maintainable.
We are releasing a new version of Forestpin Lite, with a lot of improvements over the previous version released at the 24th Fraud Conference in June. The new version is packaged as a Google Chrome offline application and therefore runs on Windows, Mac, and Linux. Chethiya Abeysinghe was behind Forestpin Lite.
We released the new version of Forestpin Enterprise last week. The new version is a complete rewrite of both the backend and the UI. The backend was rewritten to be faster and to introduce a bunch of new features and analytics. The user interface was redesigned to be much more user-friendly, with a focus on mobile devices such as tablets.
With this technique you can pass file descriptors between processes using the sendmsg() and recvmsg() functions over a UNIX domain socket. Any descriptor can be passed this way, not just file descriptors.
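A minimal sketch of the sending side in C, assuming sock is an already-connected UNIX domain socket; the receiving side mirrors this with recvmsg() and reads the descriptor back out of the control message:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send descriptor fd over the connected UNIX domain socket sock.
     * The descriptor travels as ancillary (SCM_RIGHTS) data; at least
     * one byte of regular data has to accompany it. */
    int send_fd(int sock, int fd) {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

        union {  /* buffer sized and aligned for one int of control data */
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } ctrl;
        memset(&ctrl, 0, sizeof(ctrl));

        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;  /* tells the kernel to pass descriptors */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

The kernel duplicates the descriptor into the receiving process, so the received descriptor number will usually differ from the one that was sent.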
This example is a simple server which accepts connections and echoes whatever data is sent to it. It also demonstrates the use of epoll, which is more efficient than poll: unlike with poll, the full set of events to be monitored is not passed on every wait call.
Epoll uses event registration, where descriptors to be watched can be added, modified, or removed. This makes it efficient when there is a large number of events to watch.
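A sketch of that pattern in C for an echo server like the one described; error handling and non-blocking mode are left out for brevity, and listen_fd is assumed to be a socket that is already bound and listening:

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void echo_loop(int listen_fd) {
        int epfd = epoll_create1(0);

        /* Register the listening socket once; unlike poll(), the set of
         * watched descriptors is not re-sent on every wait call. */
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event events[64];
        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1);  /* block until something is ready */
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listen_fd) {
                    /* New connection: accept it and add it to the watch set. */
                    int conn = accept(listen_fd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                } else {
                    char buf[4096];
                    ssize_t len = read(fd, buf, sizeof(buf));
                    if (len <= 0) {  /* connection closed or error: stop watching it */
                        epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);
                    } else {
                        write(fd, buf, len);  /* echo the data back */
                    }
                }
            }
        }
    }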