LAMP and Azure – Misconceptions vs Possibilities

A discussion of the Microsoft platform (Windows, IIS, SQL Server and ASP.NET) versus LAMP (Linux, Apache, MySQL, and PHP) covers a large set of topics.

My intent here is not a 1:1 comparison, but to comment on one specific scenario.

In many discussions, I have realized that people hold the misconception that Azure is not really meant for traditional web applications built on the LAMP stack (Linux, Apache, MySQL, PHP).

However, the truth is that you can deploy the LAMP stack on Azure to rapidly build, deploy, and dynamically scale websites and web apps, using either IaaS (VM scale sets) or PaaS (Azure Web Apps).

So, customers who want to move web apps to the cloud for scalability, high availability and other cloud traits such as global presence, or to dynamically scale websites up and down cost-effectively, should consider Azure. It offers a wide array of architectural choices for hosting websites (containers, VMs, PaaS services, Azure Functions, etc.) and languages (Node.js, PHP, Java, etc.). Linux Web Apps let us create fully managed Node.js and JavaScript websites.
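To make the PaaS route concrete, here is a minimal sketch using the Azure SDK for Python (azure-mgmt-web) to create a Linux App Service plan and a PHP web app. The subscription ID, resource group, resource names and runtime version are placeholders, and exact SKUs and method names can vary between SDK versions; the MySQL piece of LAMP would typically be a separate Azure Database for MySQL instance provisioned alongside the app.

# pip install azure-identity azure-mgmt-web   (assumed packages)
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "lamp-demo-rg"              # hypothetical, assumed to exist
location = "eastus"

client = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)

# Linux App Service plan: the managed compute tier that will host the PHP app.
plan = client.app_service_plans.begin_create_or_update(
    resource_group,
    "lamp-demo-plan",
    {
        "location": location,
        "kind": "linux",
        "reserved": True,                    # flag required for Linux plans
        "sku": {"name": "B1", "tier": "Basic"},
    },
).result()

# The web app itself, pinned to a PHP runtime (the web server is managed by App Service).
site = client.web_apps.begin_create_or_update(
    resource_group,
    "lamp-demo-app",
    {
        "location": location,
        "server_farm_id": plan.id,
        "site_config": {"linux_fx_version": "PHP|8.2"},
    },
).result()

print("Deployed:", "https://" + site.default_host_name)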

Providers such as Bitnami offer images that are pre-configured, tested and optimized for Microsoft Azure and portable across platforms, giving you quick, ready-to-use services.

For more information, please visit https://azure.microsoft.com/en-in/overview/choose-azure-opensource/

AWS – 3-Tier Web Application Architecture

Wikipedia says – Three-tier architecture is a client-server software architecture pattern in which the user interface (presentation), functional process logic (business rules), and computer data storage and data access are developed and maintained as independent modules, most often on separate platforms/servers.

AWS provides a reference architecture for “Web Application Hosting”, described as follows: Amazon Web Services provides the reliable, scalable, secure, and high-performance infrastructure required for web applications, while enabling an elastic, scale-out and scale-down infrastructure that matches IT costs in real time as customer traffic fluctuates.

However, the provided reference architecture is a little high level, and many people will want more detail from the perspective of AWS’s five Well-Architected pillars. Hence, I’m taking a shot at it.

The proposed reference architecture is –

When architecting technology solutions, if we neglect the five pillars of Security, Reliability, Performance Efficiency, Cost Optimization, and Operational Excellence, it can become challenging to build a system that delivers on your expectations and requirements. Incorporating these pillars into your architecture helps you produce stable and efficient systems, which in turn lets you focus on other aspects of design, such as functional requirements. With these in mind, the proposed architecture is depicted in the diagram below –

Description of the key components – from the five-pillar perspective

Security

Security of data at rest and in transit is given the utmost priority, and all servers are isolated by design. One of the most important networking features AWS provides is resource isolation using a Virtual Private Cloud (VPC). Within the VPC, a security group acts as a virtual firewall for instances, controlling inbound and outbound traffic, while a Network Access Control List (NACL) is an additional, optional layer of security that controls traffic between one or more subnets. In addition, use secure protocol listeners and enable SSL termination on the ELB to take load off the backend instances.
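As a hedged illustration of the last two points, the boto3 sketch below creates a security group for the web tier and an HTTPS listener that terminates SSL at the load balancer; the VPC ID, certificate and load balancer/target group ARNs are placeholders.

# pip install boto3
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

VPC_ID = "vpc-0123456789abcdef0"                                   # placeholder VPC
CERT_ARN = "arn:aws:acm:us-east-1:111111111111:certificate/abcd"   # placeholder ACM certificate
LB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/webapp/1234"   # placeholder ALB
TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/webapp/5678"        # placeholder target group

# Security group: the virtual firewall in front of the web-tier instances.
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow web traffic from inside the VPC only",
    VpcId=VPC_ID,
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "VPC-internal traffic only"}],
    }],
)

# HTTPS listener on the load balancer: SSL is terminated here, offloading the backend instances.
elbv2.create_listener(
    LoadBalancerArn=LB_ARN,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": CERT_ARN}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)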

Reliability

Amazon Web Services brings a lot of built-in features to address business continuity, notably Elastic Load Balancing (ELB) and multiple Availability Zones for servers/instances. ELB not only distributes load effectively among EC2 instances, but also ensures that the service is unaffected if one data center becomes unavailable for some reason. Depending on RTO/RPO requirements, the solution can also be deployed in two or more AWS regions for disaster recovery (active/passive), with Route 53 handling traffic distribution/routing across regions.
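To illustrate the active/passive multi-region piece, the sketch below creates Route 53 failover records so traffic moves from the primary region to the standby region when a health check fails; the hosted zone ID, health check ID, domain and ELB DNS names are all placeholders.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"                           # placeholder hosted zone
HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"           # placeholder health check on the primary ELB

def failover_record(role, elb_dns, health_check_id=None):
    # Build a PRIMARY or SECONDARY failover CNAME record for www.example.com (placeholder domain).
    record = {
        "Name": "www.example.com",
        "Type": "CNAME",
        "SetIdentifier": role.lower() + "-region",
        "Failover": role,                                           # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": elb_dns}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "primary-elb.us-east-1.elb.amazonaws.com", HEALTH_CHECK_ID),
        failover_record("SECONDARY", "standby-elb.us-west-2.elb.amazonaws.com"),
    ]},
)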

Performance Efficiency

The Performance Efficiency pillar focuses on the efficient use of computing resources, and on maintaining that efficiency as demand changes and technologies evolve. AWS provides multiple ways to buy EC2 capacity – on-demand, reserved (for a specific period) and spot instances (bidding on unused capacity) – and allows flexibility in the choice of instance size as well. A solution should start with smaller on-demand instances and, once the workload is understood, move to the right size and the right mix of on-demand and reserved instances. The options are endless. In certain scenarios, such as when flash traffic is expected, Auto Scaling with CloudWatch should be used so that resources are utilized effectively, and the results should be monitored.
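A minimal sketch of that flash-traffic setup: a target-tracking scaling policy keeps the Auto Scaling group near a chosen average CPU utilization, with CloudWatch supplying the metric and alarms behind the scenes. The group name and target value below are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: Auto Scaling creates and manages the CloudWatch alarms
# needed to keep the group's average CPU near the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",          # placeholder Auto Scaling group
    PolicyName="keep-cpu-near-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)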

Cost Optimization

Cost Optimization is a continual process of refinement and improvement of a system over its entire lifecycle, from the initial design of the very first POC to the ongoing operation of production workloads. A solution should monitor usage and adjust automatically using the AWS CloudWatch and Auto Scaling features. Also, using managed (PaaS) services removes the operational burden of maintaining servers for tasks like sending email or managing NoSQL databases; because these services operate at cloud scale, they can offer a lower cost per transaction or service. Finally, environments can be replicated consistently and cheaply using CloudFormation templates.
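Replicating environments with CloudFormation can itself be scripted; the stack name, template URL and parameters below are hypothetical placeholders – a minimal sketch of spinning up an identical stack per environment.

import boto3

cloudformation = boto3.client("cloudformation")

# Create a stack (e.g. a staging copy of production) from a versioned template.
cloudformation.create_stack(
    StackName="webapp-staging",
    TemplateURL="https://s3.amazonaws.com/my-templates/webapp.yaml",   # placeholder template
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "staging"},  # hypothetical parameters
        {"ParameterKey": "InstanceType", "ParameterValue": "t3.small"},
    ],
    Capabilities=["CAPABILITY_IAM"],   # needed only if the template creates IAM resources
)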

Operational Excellence

Operational Excellence covers the operational practices used to manage production: how planned changes are executed and how unexpected operational events are handled. Change execution and responses should be automated, documented, tested, and regularly reviewed. In AWS, we can set up source control, a CI/CD pipeline and release management, and aggregate logs for centralized monitoring and alerting. Make sure alerts trigger automated responses, including notifications and escalations.
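As one concrete example of an alert that drives notification and escalation, the sketch below wires a CloudWatch alarm on backend 5xx errors to an SNS topic with an e-mail subscription; the topic name, e-mail address and load balancer dimension are placeholders.

import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# SNS topic + subscription that on-call staff (or an automation endpoint) listens to.
topic_arn = sns.create_topic(Name="webapp-operations-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="oncall@example.com")   # placeholder address

# Alarm on 5xx responses from the targets behind the Application Load Balancer.
cloudwatch.put_metric_alarm(
    AlarmName="webapp-high-5xx-rate",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/webapp-alb/0123456789abcdef"}],   # placeholder
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],   # notification; escalation/automation can subscribe to the same topic
)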

File Storage and Functions – A file import story in Azure

The story goes like this – you have a set of files which should be imported into a solution hosted on Azure.

The idea is to cover the scenario technically – the key players are Azure File Storage and Azure Functions.

If you don’t know them already, here is a quick summary –

  1. Azure File storage
    It’s a service that offers file shares in the cloud using the standard Server Message Block (SMB) protocol. With Azure File storage, you can migrate legacy applications that rely on file shares to Azure quickly and without costly rewrites. Applications running in Azure virtual machines or cloud services, or on-premises clients, can mount a file share in the cloud just as a desktop application mounts a typical SMB share. Any number of application components can then mount and access the File storage share simultaneously. Since a File storage share is a standard SMB file share, applications running in Azure can access data in the share via file system I/O APIs. For more details, please refer to the Azure documentation. [Reference: Azure docs]
  2. Azure Functions – It’s a serverless compute service that enables you to run code on demand without having to explicitly provision or manage infrastructure. Use Azure Functions to run a script or piece of code in response to a variety of events – in short, a solution for easily running small pieces of code, or “functions,” in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. Functions can make development even more productive, and you can use your development language of choice, such as C#, F#, Node.js, Python or PHP. For more details, please refer to the Azure documentation. [Reference: Azure docs]

The overall process –

  • Define the structure for the input files location – In File storage, define a structure for input, processed and failed files using share(s) and directory(s)/file(s).
  • New file detection mechanism – Check for the presence of new file(s) on a predefined schedule and add a message to a queue for further processing, using a Function triggered by a timer (see the sketch after this list).
  • Import the files/data into the system – A Function that processes the input file(s) and ultimately imports the data.
  • Perform cleanup at the input files location – Mark files as processed, or move files to a processed/failed directory for reference/tracking purposes.
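A minimal sketch of the detection step, assuming the Python programming model for Azure Functions with a timer trigger and a storage-queue output binding; the share, directory and queue names are placeholders, and the Functions storage account is reused for the file share purely for illustration.

import os
import json
import azure.functions as func
from azure.storage.fileshare import ShareDirectoryClient

app = func.FunctionApp()

@app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer")        # run every 5 minutes
@app.queue_output(arg_name="outqueue",
                  queue_name="files-to-import",                       # placeholder queue
                  connection="AzureWebJobsStorage")
def detect_new_files(timer: func.TimerRequest, outqueue: func.Out[str]) -> None:
    # Client for the input directory of the file share (placeholder names).
    directory = ShareDirectoryClient.from_connection_string(
        conn_str=os.environ["AzureWebJobsStorage"],
        share_name="imports",
        directory_path="input",
    )
    # Collect the names of files (not sub-directories) waiting to be imported.
    names = [item["name"] for item in directory.list_directories_and_files()
             if not item["is_directory"]]
    if names:
        # One queue message listing the new files; the import Function picks this up.
        outqueue.set(json.dumps(names))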

Now, the devil is in the detail –

The Azure File service offers four resources: the storage account, shares, directories, and files. The File service REST API provides a way to work with share, directory, and file resources via HTTP/HTTPS operations. So, instead of a UNC path or file-share mapping, you should use the Azure Storage SDK, which is a wrapper over the Azure Storage REST API; this avoids any UNC/mapping-related issues.
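A minimal sketch of working with the share through the SDK rather than an SMB mapping, assuming the azure-storage-file-share Python package and placeholder connection string, share and directory names: read each input file and then move it to the processed directory by copying and deleting.

# pip install azure-storage-file-share
from azure.core.exceptions import ResourceExistsError
from azure.storage.fileshare import ShareClient

CONN_STR = "<storage-account-connection-string>"                     # placeholder
share = ShareClient.from_connection_string(CONN_STR, share_name="imports")   # placeholder share

input_dir = share.get_directory_client("input")
processed_dir = share.get_directory_client("processed")
try:
    processed_dir.create_directory()
except ResourceExistsError:
    pass   # directory already exists

for item in input_dir.list_directories_and_files():
    if item["is_directory"]:
        continue
    src = input_dir.get_file_client(item["name"])
    content = src.download_file().readall()       # bytes; hand off to the import logic here

    # "Move" the file: write a copy under processed/ and delete the original.
    dst = processed_dir.get_file_client(item["name"])
    dst.upload_file(content)
    src.delete_file()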