The mines package


A simulation of AI controlled minesweepers using neural networks and genetic algorithms. The minesweepers learn to sweep mines themselves and their movements are recorded in an SVG file.



Versions 0.1
Dependencies base, directory, mtl, random
License OtherLicense
Copyright Antti Salonen 2008
Author Antti Salonen
Stability Experimental
Category Game, AI
Uploaded Wed Aug 13 14:28:47 UTC 2008 by AnttiSalonen
Distributions NixOS:0.1
Last success reported on 2015-11-13



Readme for mines-0.1

Mines 0.1

Building with Cabal:

$ runhaskell Setup.hs configure --user --prefix=$HOME
$ runhaskell Setup.hs build
$ runhaskell Setup.hs install


This part isn't too difficult. Run the program (called mines) from wherever
you installed it and wait until you get some results. The output may give
some hints on how well the sweepers are doing. Every 10 generations an SVG
file is generated in ~/.mines (or Documents and Settings/user/Application 
Data/mines on Windows) that shows you how a minesweeper moved in the
previous generation.

It may take a few hundred generations before they actually get smart. When
they're smart enough for you, press Ctrl+C to exit.


To configure the program, tweak the values at the beginning of Mines.hs
and recompile. Settings like the size of the map or the number of
minesweepers change how fast the simulation runs and also how quickly
the minesweepers learn. The current values work pretty well, but feel
free to experiment.

Neural networks:

The minesweepers have a 4-3-3-2 neural network, where the inputs are the
direction of the minesweeper (x and y) and the vector to the nearest mine
(x and y). The outputs are the angle to turn (relative to the current
minesweeper direction) and the velocity for moving (which is not used
in the code in this release).
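As a sketch of how those four inputs could be assembled (illustrative code
only, not taken from the package):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type Vec = (Float, Float)

-- Direction angle (radians) as a unit vector.
dirVector :: Float -> Vec
dirVector a = (cos a, sin a)

-- Vector from the sweeper's position to the closest mine.
nearestMineVec :: Vec -> [Vec] -> Vec
nearestMineVec (px, py) mines =
  let dist (x, y) = (x - px) ^ 2 + (y - py) ^ 2
      (mx, my)    = minimumBy (comparing dist) mines
  in (mx - px, my - py)

-- The 4-element input vector for a 4-3-3-2 network:
-- direction (x, y) ++ vector to nearest mine (x, y).
nnInputs :: Float -> Vec -> [Vec] -> [Float]
nnInputs angle pos mines =
  let (dx, dy) = dirVector angle
      (mx, my) = nearestMineVec pos mines
  in [dx, dy, mx, my]
```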

To use the neural networks in NN.hs, you only need the functions newNeuralNet
and updateNeuralNet. newNeuralNet takes a list of layer sizes (such as
[4,3,3,2]) as well as minimum and maximum values for the weights (floats).
updateNeuralNet takes a weighting function (such as sigmoid or (*1)), a list
of inputs and a neural net, and returns the list of outputs.
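The calling pattern can be illustrated with a small self-contained
re-implementation. This is a sketch only: the representation below (a network
as nested weight lists, no bias inputs) is an assumption, not the actual
NN.hs code.

```haskell
-- Illustrative re-implementation, NOT the package's NN.hs.
-- A network is a list of layers; each layer is a list of neurons;
-- each neuron is a list of input weights (biases omitted for brevity).
type Net = [[[Float]]]

sigmoid :: Float -> Float
sigmoid x = 1 / (1 + exp (negate x))

-- One layer: each neuron outputs the weighting function applied to
-- the weighted sum of its inputs.
runLayer :: (Float -> Float) -> [Float] -> [[Float]] -> [Float]
runLayer act inputs = map (act . sum . zipWith (*) inputs)

-- Rough analogue of updateNeuralNet: weighting function, inputs and
-- a net in, list of outputs out.
runNet :: (Float -> Float) -> [Float] -> Net -> [Float]
runNet act = foldl (runLayer act)

-- A 4-3-3-2 net with all weights zero: every weighted sum is 0, so
-- with sigmoid every output is 0.5.
zeroNet :: Net
zeroNet = [ replicate 3 (replicate 4 0)
          , replicate 3 (replicate 3 0)
          , replicate 2 (replicate 3 0) ]
```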

Genetic algorithms:

To train the neural networks you need the genetic algorithms found in
GA.hs. Use getWeights in NN.hs to get the list of weights that serve as
the genes for the genetic algorithms; use crossOver, mutate, mixGenes
and pickRoulette to choose the parents and create the children; and use
putWeights to apply the new weights to the new networks.
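The operators can be illustrated with independent, simplified definitions.
The bodies below are assumptions for illustration, not the GA.hs code; the
real versions draw randomness from System.Random, which is replaced here by
explicit arguments.

```haskell
type Genes = [Float]

-- Single-point crossover at position n: swap the tails.
crossOver :: Int -> Genes -> Genes -> (Genes, Genes)
crossOver n a b = (take n a ++ drop n b, take n b ++ drop n a)

-- Mutation as adding per-gene noise (all zeros = no mutation);
-- the real code would draw the noise from a random generator.
mutate :: [Float] -> Genes -> Genes
mutate = zipWith (+)

-- Roulette-wheel selection: given r in [0, total fitness),
-- return the genome whose fitness slice contains r.
pickRoulette :: Float -> [(Genes, Float)] -> Genes
pickRoulette _ []          = error "empty population"
pickRoulette _ [(gs, _)]   = gs
pickRoulette r ((gs, f) : rest)
  | r < f     = gs
  | otherwise = pickRoulette (r - f) rest
```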


The types for the weights in the neural networks, as well as for the genes
in the genetic algorithms, are currently hard-coded to Float. Making them
configurable per application would be an improvement.
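One possible direction, sketched as a suggestion (the NeuralNet type and
field name here are invented for illustration): parameterize the structures
over the weight type and constrain it only where an operation needs it.

```haskell
-- Hypothetical sketch: parameterize over the weight/gene type
-- instead of hard-coding Float.
newtype NeuralNet w = NeuralNet { netLayers :: [[[w]]] }
  deriving Show

-- Functions then demand only the operations they use, e.g. Num:
scaleWeights :: Num w => w -> NeuralNet w -> NeuralNet w
scaleWeights k (NeuralNet ls) = NeuralNet (map (map (map (k *))) ls)
```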

More info:

For questions, you can reach me by e-mail. I've written about the program
in my blog. Feel free to improve the code.

The minesweepers were inspired by an article by Mat Buckland on neural
networks.

Licensed under the MIT license; see LICENSE for details. Copyright 2008 Antti Salonen.