This blog was created to build a habit of critically reading AI research papers and brainstorming ideas. I'm hoping that posting my reflections online will spur me on to produce better analyses and incite discussion in this ever-evolving field.

The aim of the blog is to explain technical concepts used in AI as intuitively as possible, using physical analogies and first-principles derivations when required. Coming from an engineering science background (with a focus on mechanical engineering) myself, I find it easier to reason about neural networks that way: seeing them as electronic circuits of information, backpropagation as a form of adder-circuit feedback, and so on. While analogies may oversimplify concepts and hide the underlying math, the hope is that these memory tricks help with getting the gist of things and with knowledge retention. I will probably rely on these notes myself.

To readers: if you have any concerns or discussion points to share, feel free to contact me or raise an issue in this repository on GitHub.
