# Random Forest Regression Pt. 2 Visualizing the Data

# Random Forest Regression Pt. 1 Algorithms, Importing, Exploring and Preprocessing the Data

This is the first post in a series on regression with Random Forests, used to predict the stock price of the Royal Bank of Canada (ticker RY). The full technology stack includes Python, Pandas, NumPy, Matplotlib/Plotly, and Scikit-Learn.
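As a rough illustration of the kind of pipeline such a series builds, here is a minimal sketch that fits a `RandomForestRegressor` on synthetic prices standing in for RY data. The lag features, split ratio, and hyperparameters are illustrative assumptions, not the post's actual code:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for RY closing prices; the posts pull these
# from Yahoo! Finance (a local CSV or pandas_datareader) instead.
rng = np.random.default_rng(0)
dates = pd.date_range("2019-01-02", periods=300, freq="B")
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, len(dates))),
                  index=dates, name="Close")

# Simple lag features: predict today's close from the previous 5 closes.
df = pd.DataFrame({f"lag_{i}": close.shift(i) for i in range(1, 6)})
df["target"] = close
df = df.dropna()

X, y = df.drop(columns="target"), df["target"]

# Chronological split -- shuffling would leak future prices into training.
split = int(len(df) * 0.8)
X_train, X_test = X.iloc[:split], X.iloc[split:]
y_train, y_test = y.iloc[:split], y.iloc[split:]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)
print(f"test R^2: {r2:.3f}")
```

Note that tree ensembles cannot extrapolate beyond the price range seen in training, which is one reason later posts in a series like this spend time on preprocessing and evaluation.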

First, we discuss the Decision Tree and Random Forest algorithms. Next, the data is imported from Yahoo! Finance, with demonstrations for both local CSV files and sourcing via pandas_datareader. Afterwards, preliminary exploration is done with Pandas and its DataFrame. Finally, the data is preprocessed in preparation for visualization and modeling.

# Plotly: Getting Started and First Impressions

Visualization packages are a critical component of any data scientist’s toolkit. They help in understanding the data and in finding patterns and outliers that are not immediately obvious in tabular form. They are also integral to evaluating the performance of learning algorithms. This is why it pays to try new offerings like Plotly and consider whether they deserve a place in your workflow.

# Univariate Linear Regression with AMZN and Scikit-Learn

In this post, we explore univariate linear regression with Amazon (ticker AMZN) stock data using the Python data science ecosystem. The libraries used include Pandas, NumPy, Matplotlib, and Scikit-Learn.

We start with a brief introduction to univariate linear regression and how it works. The data is imported, explored, and preprocessed using Pandas and Matplotlib. The model is then fitted using both a train/test split and cross-validation with Scikit-Learn. The results for the two scenarios are then discussed and compared.

# Forecasting Stock Prices and Generating Buy Sell Signals

This is the first project I did with the Python data science stack. It takes the form of a Jupyter Notebook hosted on GitHub, which can be found here. It covers a range of concepts and techniques, including tools, data sources, data exploration and visualization, handling missing data, domain-specific considerations, and modeling.
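The notebook's actual signal logic is not described here; one common approach is a moving-average crossover, sketched below purely as an illustration (the window lengths and the crossover rule itself are assumptions, not the project's method):

```python
import numpy as np
import pandas as pd

# A moving-average crossover is one common way to generate buy/sell
# signals: go long when a fast average crosses above a slow one.
rng = np.random.default_rng(7)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)), name="Close")

fast = close.rolling(20).mean()   # short-window moving average
slow = close.rolling(50).mean()   # long-window moving average

# +1 while the fast average is above the slow one, -1 otherwise.
position = pd.Series(np.where(fast > slow, 1, -1), index=close.index)

# A signal fires where the position flips: +2 = buy, -2 = sell.
signal = position.diff()
buys = signal[signal == 2].index
sells = signal[signal == -2].index
print(f"buy signals: {len(buys)}, sell signals: {len(sells)}")
```

Because positions alternate, buys and sells can differ in count by at most one; handling the leading NaN window of the rolling averages is one of the missing-data issues a project like this has to address.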