yannis assael | the blog



Learning to Communicate with Deep Multi-Agent Reinforcement Learning

Written by iassael on 02/09/2016. Posted in machine learning

We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.

GitHub: https://github.com/iassael/learning-to-communicate
arXiv: https://arxiv.org/abs/1605.06676
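To give a rough feel for DIAL's central trick, here is a hypothetical one-parameter numpy sketch (not the paper's architecture): during centralised learning, the receiver's error signal is backpropagated through the noisy communication channel to the sender's parameters. All names and the toy setup below are illustrative; the gradient is written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sender" and "receiver": the sender emits a real-valued message,
# the channel adds Gaussian noise (as during DIAL's training phase), and
# the receiver's squared-error loss is backpropagated through the channel
# to the sender's single weight.
x, target = 1.5, 2.0   # sender's observation; value the receiver needs
w = 0.1                # sender's learnable parameter
lr = 0.1

for _ in range(200):
    noise = rng.normal(0.0, 0.05)
    m = w * x + noise                  # message after the noisy channel
    loss = (m - target) ** 2           # receiver-side loss
    grad_w = 2.0 * (m - target) * x    # gradient flows through the channel
    w -= lr * grad_w

print(abs(w * x - target))  # the decoded message ends up close to the target
```

At execution time the real system discretises the message (decentralised execution); this sketch only shows the differentiable training path.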


Batch-Normalized LSTM for Torch

Written by iassael on 16/04/2016. Posted in computing, machine learning

Recurrent Batch Normalization

Batch-Normalized LSTMs

Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville

http://arxiv.org/abs/1603.09025

Usage

Clone from: https://github.com/iassael/torch-bnlstm

local rnn = nn.LSTM(input_size, rnn_size, n, dropout, bn)

  • n = number of layers (1–N)
  • dropout = probability of dropping a neuron (0–1)
  • bn = batch normalization (true/false)
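For intuition about what the bn flag enables, here is a hypothetical numpy sketch of the paper's idea, simplified to a tanh RNN rather than the Torch module's full LSTM: batch normalization is applied separately to the input-to-hidden and hidden-to-hidden pre-activations at every timestep, with small initial gains as the paper recommends.

```python
import numpy as np

def batch_norm(x, gamma, eps=1e-5):
    # Normalize each feature over the batch dimension, then rescale by gamma.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
batch, in_size, hid = 8, 4, 5
Wx = rng.normal(size=(in_size, hid))
Wh = rng.normal(size=(hid, hid))
gamma_x, gamma_h = 0.1, 0.1  # small initial gains, per the paper

h = np.zeros((batch, hid))
for t in range(3):
    x_t = rng.normal(size=(batch, in_size))
    # BN applied separately to the two pre-activation streams:
    pre = batch_norm(x_t @ Wx, gamma_x) + batch_norm(h @ Wh, gamma_h)
    h = np.tanh(pre)

print(h.shape)  # (8, 5)
```

The real recipe also keeps separate normalisation statistics per timestep and normalises the LSTM cell state before the output gate; this sketch only shows the per-stream normalisation.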

Example

https://github.com/iassael/char-rnn

Performance

Validation scores on char-rnn with default options



OpenMP for Mac OS X El Capitan (Torch7)

Written by iassael on 14/03/2016. Posted in computing, machine learning, macosx

After following a plethora of guides on getting proper OpenMP support on OS X, this is how it can be done:

  1. Install Homebrew
  2. run “brew update” (don’t skip that 🙂 )
  3. run “brew install clang-omp”
  4. add the following lines to ~/.profile
    export PATH=/usr/local/bin:$PATH

    # CLANG-OMP
    export CC=clang-omp
    export CXX=clang-omp++

    # Brew Libs
    export C_INCLUDE_PATH=/usr/local/include:$C_INCLUDE_PATH
    export CPLUS_INCLUDE_PATH=/usr/local/include:$CPLUS_INCLUDE_PATH
    export LIBRARY_PATH=/usr/local/lib:$LIBRARY_PATH
    export DYLD_LIBRARY_PATH=/usr/local/lib:$DYLD_LIBRARY_PATH


“MKL library not found” Torch7

Written by iassael on 01/03/2016. Posted in machine learning

Installing Torch with MKL support can be tricky. I had installed MKL (as part of Parallel Studio XE), and yet the Torch installation was still not detecting it. After a long caffeinated night, I found that the problem lies in the lib and include paths used by CMake. So, given that you have already added the following to your .bashrc (Ubuntu Linux) / .bash_profile (Mac OS X):

. /opt/intel/bin/compilervars.sh intel64

Then all you need to do for Ubuntu is:

export CMAKE_INCLUDE_PATH=$CMAKE_INCLUDE_PATH:/opt/intel/compilers_and_libraries/linux/include:/opt/intel/mkl/include
export CMAKE_LIBRARY_PATH=$CMAKE_LIBRARY_PATH:/opt/intel/compilers_and_libraries/linux/lib/intel64:/opt/intel/mkl/lib/intel64

and for Mac OS X:

export CMAKE_INCLUDE_PATH=$CMAKE_INCLUDE_PATH:/opt/intel/compilers_and_libraries/mac/include:/opt/intel/mkl/include
export CMAKE_LIBRARY_PATH=$CMAKE_LIBRARY_PATH:/opt/intel/compilers_and_libraries/mac/lib:/opt/intel/mkl/lib

Torch7 Intel MKL not found

Written by iassael on 04/06/2015. Posted in computing, machine learning

I am on Mac OS X Yosemite, and although I had installed Intel MKL properly, I was getting an “mkl_intel_lp64 not found” message when trying to install Torch7.

The solution was to add the following lines to your .profile.

source /opt/intel/composer_xe_2015.3.187/bin/compilervars.sh intel64

and most importantly:

export CMAKE_INCLUDE_PATH=$CMAKE_INCLUDE_PATH:/opt/intel/compilers_and_libraries/mac/include:/opt/intel/mkl/include
export CMAKE_LIBRARY_PATH=$CMAKE_LIBRARY_PATH:/opt/intel/compilers_and_libraries/mac/lib:/opt/intel/mkl/lib


Component Analysis using Torch7

Written by iassael on 23/02/2015. Posted in machine learning

  • Principal Component Analysis (PCA)
  • Whitened Principal Component Analysis (W-PCA)
  • Linear Discriminant Analysis (LDA)
  • Locality Preserving Projections (LPP)
  • Neighbourhood Preserving Projections (NPP)
  • Fast Independent Component Analysis (FastICA)

https://github.com/iassael/torch7-decomposition
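For a feel of the simplest of these, here is a hypothetical numpy sketch of PCA via the SVD (independent of the Torch package above; function and variable names are illustrative):

```python
import numpy as np

def pca(X, k):
    # Center the data, then project onto the top-k principal directions
    # (the leading right singular vectors of the centered data matrix).
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(0)
# Correlated features, so the top components capture most of the variance:
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))
Z, components = pca(X, 2)
print(Z.shape)  # (100, 2)
```

Whitened PCA additionally rescales each projected coordinate by the inverse of its singular value, so the output has identity covariance.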


Heaviside Step Function between 0 and 1 in Python (if else mathematical expression)

Written by iassael on 23/02/2015. Posted in computing, machine learning

The numpy.sign() function returns -1, 0, or 1, which was not useful in my case…

To limit the outcomes to the range of 0 to 1, you can still use the numpy.sign() function and write it as follows:

(0.5 * (np.sign(var) + 1))
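Note that this expression maps an input of exactly 0 to 0.5, the usual convention for the Heaviside step. For completeness, a couple of equivalent formulations; the last one assumes a newer NumPy, since numpy.heaviside was only added in NumPy 1.13:

```python
import numpy as np

x = np.array([-2.0, 0.0, 3.0])

step = 0.5 * (np.sign(x) + 1)  # the expression from the post
print(step)                    # [0.  0.5 1. ]

# Equivalent, if you want 0 at x == 0 instead of 0.5:
step_strict = (x > 0).astype(float)
print(step_strict)             # [0. 0. 1.]

# Newer NumPy versions provide this directly; the second
# argument is the value taken at x == 0:
print(np.heaviside(x, 0.5))    # [0.  0.5 1. ]
```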

Cheers
