Simple Matrix is a lightweight matrix operations shard. It performs `sum`, `dot`, etc. with three arguments instead of two: the source and target matrices plus a result matrix that acts as a buffer. `SimpleMatrix` mostly looks like part of a functional language in Crystal syntax.

The module is specifically designed to have increased performance reserves. It can be used, for example, when designing the interaction of layers within neural network engines.

Please read the notes on operations below.
1. Add the dependency to your `shard.yml`:

```yaml
dependencies:
  simple_matrix:
    github: fruktorum/simple_matrix
```

2. Run `shards install`

Then require the shard:

```crystal
require "simple_matrix"
```
For all calculations described below, please pay attention to the following:

- Matrix types must match.
- Almost all operations can also be performed with 2D arrays instead of matrices:
```crystal
matrix1.dot matrix2, result_matrix
# same as
matrix1.dot matrix2.buffer, result_matrix.buffer
# same as (assuming Float64 matrices - element types must match)
dot_array = Array.new( matrix1.width ){ Array.new( 5 ){ rand } }
result_array = Array.new( matrix1.height ){ Array.new( dot_array.first.size ){ 0.0 } }
matrix1.dot dot_array, result_array
```
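To make the shapes concrete, here is a minimal sketch using the constructors described later in this README (the dimensions are illustrative, not required by the API):

```crystal
require "simple_matrix"

# A 2x3 matrix dotted with a 3x4 matrix yields a 2x4 result;
# the result matrix is preallocated and reused as a buffer.
m1 = SimpleMatrix( Int32 ).new( 2, 3 ){ |y, x| y * 3 + x }
m2 = SimpleMatrix( Int32 ).new( 3, 4 ){ |y, x| y * 4 + x }
r  = SimpleMatrix( Int32 ).new 2, 4

m1.dot m2, r
puts r
```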
When operating on buffers (arrays instead of matrices), their types must match as well.
```crystal
# height = 4, width = 3 (filled by 0)
matrix = SimpleMatrix( UInt32 ).new 4, 3

# type = UInt8, height = 4, width = 3, values = 99
matrix = SimpleMatrix( UInt8 ).new 4, 3, 99

# Custom values:
# 0 1  2
# 3 4  5
# 6 7  8
# 9 10 11
width, height = 3, 4
matrix = SimpleMatrix( UInt16 ).new( height, width ){ |y, x| ( y * width + x ).to_u16 }

# Matrix has a public method to access the internal buffer:
matrix.buffer # => Array( Array( UInt16 ) )
```
To initialize an identity matrix (a zero-filled matrix whose main diagonal is filled with 1), here 5x5:
```crystal
matrix = SimpleMatrix( UInt8 ).identity 5
puts matrix # prints to stdout in pretty form
```

```
┌ 1 0 0 0 0 ┐
│ 0 1 0 0 0 │
│ 0 0 1 0 0 │
│ 0 0 0 1 0 │
└ 0 0 0 0 1 ┘
```
For the operations below, assume that matrices are declared as `m1`, `m2`, etc., and the result matrix is declared as `r`.
Dot product:

```crystal
m1.dot m2, r
m1.dot m2.buffer, r.buffer
```

Notes:

- The width of the first matrix must match the height of the second matrix, otherwise raises `Index out of bounds (IndexError)`.
- The width of the result (`r`) matrix must match the width of the second matrix.
- The height of the result (`r`) matrix must match the height of the first matrix.

Multiplication (element-wise):

```crystal
m1.mul m2, r
m1.mul m2.buffer, r.buffer
```
Notes:

- If the dimensions of the `m1` matrix are less than those of the `m2` matrix, only `m1`'s dimensions will be used.
- If the dimensions of the `r` matrix are greater than those of the `m1` matrix, the "tail" will not be used.
- If the dimensions of `m2` or `r` are less than required, raises `Index out of bounds (IndexError)`.
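A minimal sketch, assuming `mul` multiplies element-wise as the notes describe (the sizes and values are illustrative):

```crystal
m1 = SimpleMatrix( Int32 ).new 2, 2, 3 # all elements = 3
m2 = SimpleMatrix( Int32 ).new 2, 2, 5 # all elements = 5
r  = SimpleMatrix( Int32 ).new 2, 2    # preallocated result buffer

m1.mul m2, r # element-wise product: each cell of r should hold 3 * 5 = 15
puts r
```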
Sum (element-wise):

```crystal
m1.sum m2, r
m1.sum m2.buffer, r.buffer
```
Notes:

- If the dimensions of the `m1` matrix are less than those of the `m2` matrix, only `m1`'s dimensions will be used.
- If the dimensions of the `r` matrix are greater than those of the `m1` matrix, the "tail" will not be used.
- If the dimensions of `m2` or `r` are less than required, raises `Index out of bounds (IndexError)`.
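A matching sketch for the element-wise sum, under the same assumptions as above:

```crystal
m1 = SimpleMatrix( Int32 ).new 2, 2, 1 # all elements = 1
m2 = SimpleMatrix( Int32 ).new 2, 2, 2 # all elements = 2
r  = SimpleMatrix( Int32 ).new 2, 2    # preallocated result buffer

m1.sum m2, r # element-wise sum: each cell of r should hold 1 + 2 = 3
puts r
```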
Transpose:

```crystal
m.transpose r
m.transpose r.buffer
```

Notes:

- The width of the `m` (source) matrix must match the height of the `r` (result) matrix.
- The height of the `m` (source) matrix must match the width of the `r` (result) matrix.

Convolution:

```crystal
m.convolve k, r, padding
m.convolve k.buffer, r.buffer, padding
```
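A hedged sketch of sizing the result: for a "valid" convolution the usual arithmetic is `output = input - kernel + 1 + 2 * padding` per dimension, but the exact padding semantics here are an assumption and should be checked against the convolution specs:

```crystal
m = SimpleMatrix( Int32 ).new( 5, 5 ){ |y, x| y + x } # 5x5 input
k = SimpleMatrix( Int32 ).identity 3                  # 3x3 kernel
padding = 1

# Assumed output size per dimension: 5 - 3 + 1 + 2 * padding = 5
size = 5 - 3 + 1 + 2 * padding
r = SimpleMatrix( Int32 ).new size, size

m.convolve k, r, padding
```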
Notes:

- The dimensions of `r` should be calculated taking into account the dimensions of matrix `m` and kernel `k`. A convolution example can be found in the convolution specs.

Apply a block to each element; this is similar to applying an activation function to each neuron:
```crystal
m.apply r do |m_matrix_value, m_y, m_x|
  # do something...
end

m.apply r.buffer do |m_matrix_value, m_y, m_x|
  # do something...
end
```
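For example, a ReLU-style activation could be sketched like this (assuming the block's return value is written into the corresponding cell of `r`; that assumption should be checked against the specs):

```crystal
m = SimpleMatrix( Float64 ).new( 2, 2 ){ |y, x| ( y + x - 1 ).to_f }
r = SimpleMatrix( Float64 ).new 2, 2

m.apply r do |value, y, x|
  value < 0 ? 0.0 : value # ReLU: clamp negatives to zero
end
```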
Please feel free to make a PR! Please make sure the engine still has the desired performance characteristics: no mallocs and the best benchmark ips.

1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request