This is a comparison of C# and F# implementations of a simple neural network library I wrote for use in a side project.
A neural net is essentially a calculator that takes one or more numerical inputs and computes one or more numerical outputs.
Neural networks can fulfill a wide range of functions, but they excel at finding an optimal output given various inputs.
The inputs are held in an input layer, which connects to another layer: either the output layer or a hidden layer. Each layer, including the input layer, consists of one or more neurons. Each neuron is connected to every neuron in the next layer, and each connection is given a positive or negative weight indicating how important that connection is.
Individual neurons compute their values by performing a summing function on all inputs from the prior layer, with each input multiplied by the weight of its connection. Each value then feeds in as an input to the next layer, and this continues until the values arrive in the output layer.
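As a concrete illustration of that weighted sum (a sketch of the concept, not code from the library), consider:

```fsharp
// Sketch of a neuron's summing function; names here are illustrative only.
// Each connection pairs an input value with its weight.
let evaluateNeuron (connections: (float * float) list) : float =
    connections |> List.sumBy (fun (input, weight) -> input * weight)

// Two inputs, 0.5 and 1.0, with weights 2.0 and -0.5:
let value = evaluateNeuron [ (0.5, 2.0); (1.0, -0.5) ]  // 0.5
```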
The output layer is what the caller of the neural net will use to evaluate the result of the network. This could represent anything from whether or not to buy a piece of stock, to the attractiveness of a move in a game, to what the dominant color in an image is or how happy a face appears.
The interconnected nature of neural nets gives them the flexibility to represent innovative solutions to problems, but it also makes them hard to interpret just by reading them.
Neural networks are typically trained either through a backpropagation mechanism or by some other means such as a genetic algorithm, but both are beyond the scope of this article.
A Neuron summarizes and stores a value computed from its inputs.
In the C# implementation, there's a lot of boilerplate code for maintaining fields and properties as well as connecting to other nodes and layers. The core evaluation logic occurs in the Evaluate method and is fairly minimal, supported by the connections established in the other methods.
By contrast, the F# implementation is minimal, offering some brief property storage and some simple evaluation logic.
One of the things I like about this is that there isn’t a lot of meaningless syntax, spacing, or irrelevant logic. The downside of this is that the functional syntax can be harder to read while scanning code.
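As a rough sketch of that shape (the type and member names here are my assumptions, not the actual MattEland.AI code), an F# neuron can fit in a handful of lines:

```fsharp
// Illustrative sketch only; names are assumptions, not the library's API.
type Neuron() =
    member val Value = 0.0 with get, set
    member val Inputs : (Neuron * float) list = [] with get, set

    /// Sum each incoming neuron's value times its connection weight.
    member this.Evaluate() =
        this.Value <- this.Inputs |> List.sumBy (fun (n, w) -> n.Value * w)
        this.Value

// Usage: an input neuron feeding a downstream neuron with weight 0.5
let input = Neuron(Value = 2.0)
let output = Neuron()
output.Inputs <- [ (input, 0.5) ]
output.Evaluate()  // 1.0
```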
Layers are just collections of neurons in the same tier. The layer code manages the interconnections between nodes in different layers.
NeuralNetLayer is honestly fairly boring. It acts as glue between the different nodes, but the implementation takes 100 lines of code to do that.
The F# version is shorter, using Seq (sequence) functions to delegate responsibilities to individual Neurons.
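For instance, a layer sketched this way (illustrative names and structure, not the repository's code) can delegate each neuron's weighted sum to Seq functions:

```fsharp
// Hypothetical sketch: a layer holds neuron values, and evaluating the
// next layer delegates a weighted sum per neuron via Seq functions.
type Layer = { Values: float list }

/// weights.[i] is the list of incoming weights for neuron i of the next layer.
let evaluateNext (layer: Layer) (weights: float list list) : Layer =
    let neuronValue ws =
        Seq.zip layer.Values ws |> Seq.sumBy (fun (v, w) -> v * w)
    { Values = weights |> List.map neuronValue }

// A two-neuron layer feeding a two-neuron layer:
let hidden = evaluateNext { Values = [ 1.0; 2.0 ] } [ [ 0.5; 0.5 ]; [ 1.0; 0.0 ] ]
// hidden.Values = [1.5; 1.0]
```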
The Neural Net ties everything together into one wrapper. It arranges layers, exposes the inputs and outputs, and offers a way for callers to configure the network into a pre-determined arrangement.
C# Neural Net
Keeping to form, the C# implementation does some basic iteration and enumeration, but a pronounced amount of its length is devoted purely to syntax.
F# Neural Net
The F# version is the largest F# class, but its logic is still fairly concise, with small, focused methods.
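Conceptually, the whole network reduces to folding the inputs through each layer's weights. A minimal sketch of that idea (not the library's API; all names here are illustrative) might be:

```fsharp
// Sketch: a network as a list of weight matrices, evaluated by folding
// the input values through each layer in turn. Illustrative names only.
let evaluateLayer (inputs: float list) (weights: float list list) : float list =
    weights |> List.map (fun ws -> List.map2 (*) inputs ws |> List.sum)

let evaluateNetwork (layers: float list list list) (inputs: float list) : float list =
    layers |> List.fold evaluateLayer inputs

// A 2-input, 1-output network with a single weight layer:
let result = evaluateNetwork [ [ [ 0.5; 0.5 ] ] ] [ 1.0; 3.0 ]  // [2.0]
```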
While the F# syntax is more concise, it should be noted that this example is almost ideal for a functional language. It is also a key example of a component that could be consumed by C# code in other projects.
If you were looking to add F# to a project, I’d recommend starting with a small isolated slice of your application that other areas depend on for calculations or other sorts of transformation logic.
I personally feel that Functional Programming, or at least core concepts from those languages, can benefit software quality significantly, so this is an idea worth exploring.
Where can I find this code?
All code in these examples is hosted on GitHub at https://github.com/IntegerMan/MattEland.AI
If you’re curious about MattEland.AI, it is available as a NuGet package at https://www.nuget.org/packages/MattEland.AI.Neural/