Since joining Flipdish I have been working in the Ecosystems and Payments team, which is responsible for client subscriptions, entitlements and payouts, client reporting, and third-party integrations.
The company is moving towards a subscription-based model, and as a result we needed to create a subscriptions and entitlements system. Interfacing with Stripe, the system implements an event-driven architecture in TypeScript and is deployed in AWS, leveraging serverless offerings such as Lambda, DynamoDB and EventBridge. Downstream services subscribe to our broadcasts of entitlement changes, allowing them to grant access to features or place them behind a paywall.
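To illustrate the broadcast side of this architecture, below is a minimal Python sketch (using boto3 purely for illustration; the production code is TypeScript) of publishing an entitlement-change event to an EventBridge bus. The bus name, event source and payload shape are hypothetical.

```python
import json

import boto3

# Illustrative only: the bus name, source, detail type and payload shape are
# hypothetical, and the real service is written in TypeScript, not Python.
events = boto3.client("events")

def publish_entitlement_change(client_id: str, feature: str, granted: bool) -> None:
    """Broadcast an entitlement change so downstream services can grant or paywall the feature."""
    events.put_events(
        Entries=[
            {
                "EventBusName": "entitlements-bus",       # hypothetical bus name
                "Source": "subscriptions.entitlements",   # hypothetical event source
                "DetailType": "EntitlementChanged",       # hypothetical detail type
                "Detail": json.dumps(
                    {"clientId": client_id, "feature": feature, "granted": granted}
                ),
            }
        ]
    )
```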
Related to this, I've been heavily involved in our self-serve initiative, which should allow potential clients to sign up and onboard themselves. I built a self-serve signup flow: a micro frontend that shows potential clients the available packages in their region and ultimately ends with them entering payment information and creating their account.
I have contributed to a range of our integrations, including order ingest (Order with Google), third-party delivery (Uber Direct, Glovo) and third-party point-of-sale systems (Pixel Point, Lightspeed, Micros Simphony), all deployed as C# container apps in Azure. I also worked on the Customer Feedback App, our first premium extension, which allows our clients to gather feedback from end users about the food and service they provided.
After IHS Markit merged with S&P Global, I formed a new subteam with another developer to create a self-service environment management tool in the SaaS platform. Using this tool, tenants on the platform could create, manage and delete environments for each software service they used.
I developed the backend functionality: a C# CQRS API and microservice following a Domain-Driven Design architecture and an Event Sourcing pattern. A database-per-tenant architecture was used, with Entity Framework migrations managing the schemas.
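As a rough illustration of the Event Sourcing side of this design, here is a small Python sketch (the real service is C#) in which state is never stored directly but is rebuilt by folding over appended domain events; the event and aggregate names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Set

# Hypothetical domain events for illustration; the production service is C#.
@dataclass
class EnvironmentCreated:
    environment_id: str
    service_name: str

@dataclass
class EnvironmentDeleted:
    environment_id: str

@dataclass
class EnvironmentAggregate:
    """Commands append events; current state is derived by replaying the event stream."""
    events: List[object] = field(default_factory=list)

    def create(self, environment_id: str, service_name: str) -> None:
        # Command side: validate, then record what happened as an event.
        self.events.append(EnvironmentCreated(environment_id, service_name))

    def delete(self, environment_id: str) -> None:
        self.events.append(EnvironmentDeleted(environment_id))

    def active_environments(self) -> Set[str]:
        # Query side: fold over the events to compute the current state.
        active: Set[str] = set()
        for event in self.events:
            if isinstance(event, EnvironmentCreated):
                active.add(event.environment_id)
            elif isinstance(event, EnvironmentDeleted):
                active.discard(event.environment_id)
        return active
```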
I also developed the first iteration of the micro frontend using React, which allowed users to perform CRUD operations on environments. As we were the first team to start building micro frontends, I also wrote a shell for this microsite to sit within, which managed higher-level concerns such as authentication and routing.
Working in the Platform Services team, I was responsible for developing a cloud platform to house all of the company's SaaS applications. The team's goal was to provide a set of core microservices and infrastructure that product teams could use to deploy and integrate their applications.
One of the main features I contributed to was multi-tenant access, which allowed a user to belong to multiple tenants and select one at login time. This involved rearchitecting the login functionality, ensuring a tenant could be selected and exposing this information in JWTs. I also worked on the frontend implementation of this, which was built with Razor.
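The essence of the change can be sketched in Python (using PyJWT purely for illustration; the real login stack was C#): the tenant selected at login is embedded as a claim in the issued token. The claim names and signing setup below are hypothetical.

```python
import jwt  # PyJWT

# Illustrative only: claim names and the signing scheme are hypothetical,
# and the real implementation lived in the platform's C# login services.
def issue_token(user_id: str, selected_tenant_id: str, signing_key: str) -> str:
    claims = {
        "sub": user_id,
        "tenant_id": selected_tenant_id,  # tenant chosen by the user at login time
    }
    return jwt.encode(claims, signing_key, algorithm="HS256")

def tenant_from_token(token: str, signing_key: str) -> str:
    # Downstream services read the tenant claim to scope requests.
    return jwt.decode(token, signing_key, algorithms=["HS256"])["tenant_id"]
```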
I joined the newly formed Dev Tooling Apps team, which was responsible for building DevOps-related tools requested by other software teams to improve productivity. We produced what became known as the Dev Tooling App, which consisted of a C# backend and an Angular frontend and was used daily by engineers across many teams.
I worked on many areas of the app. One feature I developed aggregated and displayed transient environment information, showing the deployed microservices, build history, CI/CD pipeline statuses and an integration with Grafana for viewing logs and traces. This screen also included a single-click environment tear-down button, which would destroy all the cloud (AWS) resources related to the selected environment. I was also involved in writing a DORA metrics page, which pulled data from Jira and GitLab to provide a breakdown of the metrics per team.
I architected and developed a Microsoft Teams bot that serviced commands sent by developers, such as listing and destroying environments. The bot also became a mechanism for sending notifications to developers based on their user preferences. Service-to-bot communication was achieved using MassTransit over RabbitMQ.
The Dev Tooling App was deployed in its own EKS cluster, with its environment provisioned by Terraform.
A 9-week internship in which I joined one of the Enterprise Data Management (EDM) software teams, tasked with extending and upgrading the web interface side of EDM. The work mainly involved translating JavaScript features written in an old framework into Angular TypeScript code. I also extended some of the C# APIs.
In addition to this, we were given time to work on our own project: I developed a build monitor, an Angular SPA that displayed the status of CI/CD builds and pipelines, integrating with TeamCity to pull build information.
A year-long industrial placement in the Defence Mission Systems (DMS) division of Thales. I was part of the Command and Control software team, working on maintaining and upgrading the Mine Counter Measure Management System (MCUBE), a master control system for mine-hunting naval ships.
The main area of MCUBE I was responsible for was the navigation and autopilot subsystems. This involved working closely with domain experts to ensure the safety-critical navigation algorithms were implemented correctly. I travelled to Germany to carry out integration testing between the navigation systems I wrote and the rudder-propulsion system used by a particular client.
I also spent a large portion of the placement working on a messaging system, responsible for constructing and decoding standardised messages used to plan missions. I trained and handed over this work to an apprentice before the end of my placement.
A 5-week solo project in which I developed an app for the trampoline park, targeting both iOS and Android devices. Through the app, users could log in to the park's website, browse ticket pricing and book sessions. As the main demographic of the park's customers was young children and teenagers, I decided to develop and embed a small arcade game into the app.
After learning that Spotify had audio analysis data available via their API, I came up with the idea of turning that data into light with the help of Philips Hue smart bulbs. I created an Angular frontend and a collection of RESTful microservices that together form a visualiser for the currently playing Spotify song.
Using the Angular SPA, users can log in to both their Spotify account and their local Philips Hue bridge. After doing so, a list of lights connected to their bridge is displayed, and the user can select which lights they wish to include in the visualisation. Additionally, for each light they can select the visualisation type: which features of the song the light reacts to and which colours to use. After clicking the start button, the selected lights react to the currently playing song on the user's Spotify account. If the song is paused, the visualisation also pauses.
The visualisation logic is housed in its own microservice, with a data stream (TCP socket) established between it and the frontend when a visualisation session starts. With this architecture, multiple visualisation sessions by different users in different locations can take place simultaneously.
Using the Twitter API, I gathered tweets posted over a 12-month period that contained keywords relating to Tesla stock. Each tweet was passed through a natural language sentiment binary classifier (Naive Bayes) to categorise it as either positive or negative. The percentage of tweets that were positive on each day of the 12-month period formed a time series.
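As a sketch of how the classified tweets become a daily time series (assuming a pandas DataFrame with hypothetical column names for the tweet timestamp and predicted label):

```python
import pandas as pd

# Assumes one classified tweet per row; the column names here are hypothetical.
# created_at: tweet timestamp, sentiment: "positive" or "negative" (Naive Bayes output).
def daily_positive_percentage(tweets: pd.DataFrame) -> pd.Series:
    tweets = tweets.copy()
    tweets["date"] = pd.to_datetime(tweets["created_at"]).dt.date
    tweets["is_positive"] = tweets["sentiment"].eq("positive")
    # The percentage of positive tweets per day forms the sentiment time series.
    return tweets.groupby("date")["is_positive"].mean() * 100
```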
I trained two long short-term memory (LSTM) neural networks to predict the net direction of the stock price movement from one day to the next (a binary classification problem: either up or down). The difference between the two networks was the inputs they received: one received the normalised historical price data time series as input, while the other received both the historical price data and the Twitter sentiment time series. The architecture and hyperparameters of both networks were varied to examine the effect they had. The networks were trained on 10 months' worth of data and tested on the remaining month.
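A minimal sketch of the price-plus-sentiment variant is shown below, assuming the two time series have been windowed into (samples, timesteps, features) arrays; the window length, layer size and training settings are illustrative rather than the actual hyperparameters explored.

```python
from tensorflow import keras

WINDOW = 20      # illustrative look-back window, not the value actually used
N_FEATURES = 2   # normalised price + daily sentiment percentage

# X: (samples, WINDOW, N_FEATURES) windows; y: 1 if the next day's price moved up, else 0.
model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Roughly 10 months of windows for training, with the final month held out for testing:
# model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test))
```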
The dissertation explains the motivation behind the project by comparing it to related academic literature. The prediction accuracies achieved by each hyperparameter permutation of both neural networks are presented and compared.
I created a Python library that can be used to backtest a binary technical trading strategy on historical price data provided by the user. Using the library, a set of rules defining when to buy and sell can be created programmatically. A simulation is then run using these rules on the historical price data, after which the total profit/loss is calculated, taking into account any commission payable to the broker on profitable trades.
To define a strategy, the user selects which technical indicators they wish to use (e.g. RSI, MACD, Bollinger Bands) and the triggers/signals that will cause a buy or sell (e.g. RSI indicates oversold, price has left the Bollinger Band range). The parameters used to set up the indicators are fully customisable, and the rules for when to buy and sell can be a complex combination of individual indicator signals. Users can add their own custom indicators and signals, should they need to, by implementing the provided base classes.
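The simulation step can be sketched as follows; this is a simplified illustration rather than the library's actual API, with the composed indicator signals reduced to plain callables.

```python
from typing import Callable, Sequence

Rule = Callable[[Sequence[float], int], bool]  # stands in for a composed indicator signal

def backtest(prices: Sequence[float], buy_rule: Rule, sell_rule: Rule,
             commission: float = 0.01) -> float:
    """Run the buy/sell rules over the price series and return the total profit/loss."""
    entry_price = None
    total = 0.0
    for i, price in enumerate(prices):
        if entry_price is None and buy_rule(prices, i):
            entry_price = price                    # open a position
        elif entry_price is not None and sell_rule(prices, i):
            profit = price - entry_price           # close the position
            if profit > 0:
                profit -= commission * price       # commission charged on profitable trades
            total += profit
            entry_price = None
    return total
```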
As part of my master's year I worked on a group project in a team of six, with the brief of making York a smart city within the next five years in a way that must benefit local businesses. The idea we developed was to implement a dynamic congestion charge, driven by a range of real-time data sets. Many studies of city centres around the world have shown that dynamic congestion charges tend to increase footfall, which would benefit York's heavily tourist- and retail-based city centre economy.
We developed a prototype system consisting of a Vue.js frontend and a Python backend. The frontend was aimed at the road users of York and displayed the congestion charge information visually on a map. Users could pre-purchase tickets, which were typically cheaper than on-demand prices. The backend read real-time data sets such as footfall, traffic counts and emissions levels, and used these to set a charge for each area of York. In our report we outlined future work, including the development of a stacked AI model that would predict future congestion charges (by predicting the future values of footfall, traffic counts and so on).
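A hypothetical sketch of the per-area pricing step is shown below; the input fields, weightings and price bounds are illustrative and not the values used in the prototype.

```python
def congestion_charge(footfall: float, traffic_count: float, emissions: float,
                      base_charge: float = 2.0, max_charge: float = 10.0) -> float:
    """Scale a base charge for one area of the city using normalised real-time indicators."""
    # Each input is assumed to have already been normalised to the 0..1 range.
    load = 0.5 * traffic_count + 0.3 * emissions + 0.2 * footfall
    return round(min(max_charge, base_charge * (1 + 4 * load)), 2)

# e.g. a quiet period versus a busy period in the same area
print(congestion_charge(footfall=0.1, traffic_count=0.2, emissions=0.1))  # low charge
print(congestion_charge(footfall=0.9, traffic_count=0.8, emissions=0.7))  # higher charge
```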
I was the designated technical lead on the project and outlined the architecture of the backend. Along with the other backend engineers, I helped write the algorithms that computed the congestion charges. In addition, I took ownership of the API that allowed the frontend and backend to communicate.
I implemented and assigned a feedforward neural network (MLP) with random weights to each individual in a population of agents acting in a 2D plane. The outputs of the neural network control the movement of the agent: one controls the forward speed and the other determines the change in rotation. The aim of the agents is to navigate an obstacle course without colliding with any objects; the further they get, the higher their fitness. Each individual has five sensors that detect the distance to the closest object in different directions, and these sensor readings form the input to the neural network.
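A minimal sketch of one agent's controller is given below; the hidden-layer size and activation functions are assumptions, but the input/output shape matches the description (five distance sensors in, forward speed and change in rotation out).

```python
import numpy as np

class AgentBrain:
    """Feedforward MLP with random weights mapping sensor readings to movement outputs."""

    def __init__(self, hidden: int = 8, rng: np.random.Generator | None = None):
        rng = rng or np.random.default_rng()
        self.w1 = rng.normal(size=(5, hidden))   # 5 distance sensors -> hidden layer
        self.w2 = rng.normal(size=(hidden, 2))   # hidden layer -> (speed, rotation change)

    def act(self, sensors: np.ndarray) -> tuple[float, float]:
        hidden = np.tanh(sensors @ self.w1)
        speed, rotation = np.tanh(hidden @ self.w2)
        return float(speed), float(rotation)
```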
After all individuals in a population have collided with an obstacle, genetic selection, crossover and mutation occur. Selection is tournament based, with fitter individuals more likely to be selected. Crossover is done on a per-weight basis, with an equal chance of each weight being inherited from either parent. Mutations are also applied on a per-weight basis, randomly changing a weight's value with a given probability.
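The genetic operators can be sketched as below, assuming each individual's weights are flattened into a NumPy array; the tournament size, mutation rate and mutation strength are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def tournament_select(population: list[np.ndarray], fitnesses: list[float], k: int = 3) -> np.ndarray:
    """Tournament selection: the fittest of k randomly chosen individuals wins."""
    contenders = rng.choice(len(population), size=k, replace=False)
    return population[max(contenders, key=lambda i: fitnesses[i])]

def crossover(parent_a: np.ndarray, parent_b: np.ndarray) -> np.ndarray:
    """Per-weight crossover: each weight is taken from either parent with equal probability."""
    take_from_a = rng.random(parent_a.shape) < 0.5
    return np.where(take_from_a, parent_a, parent_b)

def mutate(weights: np.ndarray, rate: float = 0.05) -> np.ndarray:
    """Per-weight mutation: each weight is randomly changed with the given probability."""
    mutated = rng.random(weights.shape) < rate
    noise = rng.normal(scale=0.5, size=weights.shape)   # illustrative mutation strength
    return np.where(mutated, weights + noise, weights)
```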