• Mentoring for Sequence Models with deeplearning.ai


    I was invited to be a volunteer mentor for the Sequence Models course, which is part of the deeplearning.ai Deep Learning Specialization on Coursera. The course covers Recurrent Neural Networks for Natural Language Processing. I received the invitation by email a few weeks after completing the five courses in the Specialization in May 2018, having taken it as a follow-up to the same instructor's ever-popular Stanford Machine Learning course.

    Read More
  • A Django REST API


    Django is a scalable, open-source Python web framework for building custom web applications. It is designed to be fast and flexible and follows a variant of the MVC pattern called Model-View-Template (MVT). It has many features built in, such as an admin dashboard, user authentication, and form validation.
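
    As a rough illustration of the kind of endpoint such a post builds, here is a minimal sketch of a JSON view in plain Django; the Book model, its fields, and the URL are hypothetical, and the post itself may well use Django REST Framework rather than a hand-rolled view.

        # models.py (inside a Django app) -- a hypothetical model
        from django.db import models

        class Book(models.Model):
            # Simple fields; the admin dashboard and form validation work on these out of the box
            title = models.CharField(max_length=200)
            author = models.CharField(max_length=100)

        # views.py -- a minimal read-only JSON endpoint
        from django.http import JsonResponse
        from .models import Book

        def book_list(request):
            # Serialise the queryset to a list of dicts; safe=False allows a top-level list
            data = list(Book.objects.values("id", "title", "author"))
            return JsonResponse(data, safe=False)

        # urls.py -- route GET /api/books/ to the view
        from django.urls import path
        from .views import book_list

        urlpatterns = [path("api/books/", book_list)]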

    Read More
  • Up and Running with React


    React is a popular and in-demand front-end JavaScript library created by Facebook. It is used to build very responsive (meaning quick) web applications that update data without a full page reload, which can give the impression of using a desktop application rather than one running in a browser. These web apps can be either single- or multi-page. React’s nearest competitor is Angular, which is backed by Google.

    Read More
  • Random Forest Regression Pt. 4
    Training using One Feature with Grid Search and Randomised Grid Search


    This is Pt. 4 in the series covering Random Forest Regression to predict the price of RY stock, following on from Pt. 3 on Feature Engineering. In this post, training is done using the estimators and tools provided by Scikit-Learn.

    We begin with a discussion of the theoretical concepts behind this process, including hyperparameters, Grid Search, and pickling. Then, the models are trained using one feature with both Grid Search and Randomised Grid Search. The hypothesis that Randomised Grid Search is generally the...
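
    As a rough sketch of what that tuning step can look like with Scikit-Learn (not the post’s actual code; the synthetic single feature and the parameter grid below are illustrative assumptions):

        # Tune a RandomForestRegressor on one feature with exhaustive and randomised search,
        # then pickle the best model. Data, grid, and file name are illustrative.
        import numpy as np
        import joblib
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

        rng = np.random.RandomState(42)
        X = rng.rand(200, 1)                                   # a single feature, e.g. a lagged close
        y = 3 * X.ravel() + rng.normal(scale=0.1, size=200)    # stand-in for the RY target

        param_grid = {"n_estimators": [50, 100, 200],
                      "max_depth": [2, 4, 8, None]}

        # Exhaustive Grid Search over all 12 combinations
        grid = GridSearchCV(RandomForestRegressor(random_state=42), param_grid, cv=5)
        grid.fit(X, y)

        # Randomised search samples only n_iter of those combinations
        rand = RandomizedSearchCV(RandomForestRegressor(random_state=42), param_grid,
                                  n_iter=6, cv=5, random_state=42)
        rand.fit(X, y)

        print(grid.best_params_, grid.best_score_)
        print(rand.best_params_, rand.best_score_)

        joblib.dump(grid.best_estimator_, "rf_one_feature.pkl")  # "pickling" the tuned model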

    Read More
  • Random Forest Regression Pt. 1
    Algorithms, Importing, Exploring and Preprocessing the Data


    This is the first post in a series on using Random Forest Regression to predict the price of Royal Bank of Canada stock (ticker RY). The full technology stack includes Python, Pandas, NumPy, Matplotlib/Plotly, and Scikit-Learn.

    Firstly, we discuss the Decision Tree and Random Forest algorithms. Next, the data is imported from Yahoo! Finance, with demonstrations of both loading local CSV files and sourcing the data via pandas_datareader. Afterwards, preliminary explorations are done with Pandas and its DataFrame. Finally, the data is preprocessed in preparation for visualisation and modelling.
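
    A minimal sketch of the two import routes and the first explorations, assuming illustrative dates and file names (the Yahoo! Finance endpoint behind pandas_datareader has changed over the years, so the call may need updating):

        import pandas as pd
        from pandas_datareader import data as pdr

        # Option 1: pull RY quotes directly from Yahoo! Finance
        ry = pdr.DataReader("RY", "yahoo", start="2015-01-01", end="2018-05-01")

        # Option 2: read a local CSV export of the same data
        # ry = pd.read_csv("RY.csv", index_col="Date", parse_dates=True)

        # Preliminary exploration with the DataFrame
        print(ry.head())          # first rows of the OHLC/volume columns
        print(ry.describe())      # summary statistics
        print(ry.isna().sum())    # missing values to handle before preprocessing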

    Read More
  • Plotly: Getting Started and First Impressions


    Visualisation packages are a critical component of any data scientist’s toolkit. They help in understanding the data and in finding patterns and outliers that are not immediately obvious from tabular data. They are also integral to evaluating the performance of learning algorithms. This is why it is worth trying new offerings like Plotly and considering whether they merit a place in your workflow.
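
    To give a sense of how little code a first chart takes, here is a minimal sketch using synthetic data; the graph_objs/offline interface shown is an assumption about the Plotly version in use at the time:

        import plotly.graph_objs as go
        from plotly.offline import plot

        # A minimal interactive line chart written out as a standalone HTML file
        trace = go.Scatter(x=[1, 2, 3, 4], y=[10, 11, 9, 12],
                           mode="lines+markers", name="example")
        layout = go.Layout(title="Getting started with Plotly",
                           xaxis=dict(title="x"), yaxis=dict(title="y"))
        plot(go.Figure(data=[trace], layout=layout), filename="example.html")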

    Read More
  • Univariate Linear Regression with AMZN and Scikit-Learn


    In this post, we explore univariate Linear Regression on Amazon stock data (ticker AMZN) using the Python data science ecosystem. The libraries used include Pandas, NumPy, Matplotlib, and Scikit-Learn.

    We start with a brief introduction to univariate linear regression and how it works. The data is imported, explored, and preprocessed using Pandas and Matplotlib. The model is then fitted to the data using both a train/test split and cross-validation with Scikit-Learn. The results of the two scenarios are then discussed and compared.
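
    The two evaluation set-ups can be sketched roughly as follows, with synthetic data standing in for the AMZN series and an illustrative split ratio and fold count:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split, cross_val_score

        rng = np.random.RandomState(0)
        X = np.arange(300, dtype=float).reshape(-1, 1)         # e.g. a trading-day index
        y = 0.5 * X.ravel() + rng.normal(scale=5.0, size=300)  # stand-in for closing prices

        # Scenario 1: a single train/test split (no shuffling, to respect time order)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
        model = LinearRegression().fit(X_train, y_train)
        print("Hold-out R^2:", model.score(X_test, y_test))

        # Scenario 2: k-fold cross-validation on the full series
        scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
        print("Cross-validation R^2 scores:", scores)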

    Read More
  • Forecasting Stock Prices and Generating Buy Sell Signals


    This is the first project I did with the Python data science stack. It takes the form of a Jupyter Notebook hosted on GitHub, which can be found here. It covers a range of concepts and techniques, including tools, data sources, data exploration and visualization, handling missing data, domain-specific considerations, and modeling.
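
    The notebook’s actual modelling approach is not spelled out in this summary; purely as an illustration of what a buy/sell signal can look like in Pandas, here is a simple moving-average crossover sketch on synthetic prices (not necessarily the notebook’s method):

        import numpy as np
        import pandas as pd

        # Synthetic price series standing in for real quotes
        rng = np.random.RandomState(1)
        prices = pd.Series(100 + np.cumsum(rng.normal(size=250)),
                           index=pd.date_range("2018-01-01", periods=250, freq="B"))

        # Fast and slow moving averages; a crossover generates the signal
        fast = prices.rolling(window=20).mean()
        slow = prices.rolling(window=50).mean()

        signal = pd.Series(np.where(fast > slow, "buy", "sell"), index=prices.index)
        print(signal.tail())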

    Read More