Create My Python Flask App

Hi, in this article we are going to discuss how to create an app that responds to a given request. The code is available here.

This is basically a web app written with the Python Flask framework. I will use it to test the integration of several cloud features (uploading, resizing and visualizing pictures, login functionality, web scraping, etc.).

The CSS will be ugly (meaning I copy-pasted a lot!). Lastly, every time I add a major feature I will push the app to a container registry and deploy it from the cloud platform's command line.

Version 1 of the web app has the following characteristics (a minimal Flask sketch follows the list):

HOSTING :

  • Single VM in Azure

FEATURES

  • Pages: About me / Contact / Login with a local user defined in the app for the moment
    • Services page:
      • Upload / list uploaded pictures
  • IP localization (from a file today; it will become a separate service in the future)
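
As a rough idea of the Version 1 skeleton, here is a minimal Flask sketch (the route names, the hard-coded local user and the port are illustrative assumptions, not the app's actual code):

from flask import Flask, request

app = Flask(__name__)

# Local user defined in the app for the moment (assumption for the sketch)
LOCAL_USER = {"username": "demo", "password": "changeme"}

@app.route("/")
def presentation():
    return "Presentation of me"

@app.route("/contact")
def contact():
    return "Contact page"

@app.route("/login", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        ok = (request.form.get("username") == LOCAL_USER["username"]
              and request.form.get("password") == LOCAL_USER["password"])
        return ("Logged in", 200) if ok else ("Invalid credentials", 401)
    return '<form method="post"><input name="username"><input name="password" type="password"><button>Login</button></form>'

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)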

A screenshot of the result:

Scraping with Python and collecting pictures

For one of my projects I needed to do web scraping and collect images from websites.

Here is the code I use to collect them:

import requests
from bs4 import BeautifulSoup
from os.path import basename
import os
import json
import random
import string
import time
from azure.storage.blob import BlockBlobService

CURRENT_DIR = os.getcwd()
STORING_DIRECTORY_NAME = "stroage_scrapped_images"
STORING_DIRECTORY = CURRENT_DIR + "/" + STORING_DIRECTORY_NAME

# Create the local staging directory if it does not exist yet
if not os.path.exists(STORING_DIRECTORY):
    os.makedirs(STORING_DIRECTORY)

def randomword(length):
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(length))

startdate = time.clock()

metadata_loaded = {'Owner': 'ToBeAddedSoon', 'Date_Of_Upload': startdate, 'VAR_2': 'VAL_VAR_2', 'VAR_3': 'VAL_VAR_3', 'VAR_4': 'VAL_VAR_4'}

# Load the storage account credentials from a local JSON file
with open("credentials.json", 'r') as f:
    data = json.loads(f.read())
    StoAcc_var_name = data["storagacc"]["Accountname"]
    StoAcc_var_key = data["storagacc"]["AccountKey"]
    StoAcc_var_container = data["storagacc"]["Container"]
#print StoAcc_var_name, StoAcc_var_key, StoAcc_var_container


def copy_azure_files(source_url, destination_object, destination_container):
    # Server-side copy: Azure pulls the image directly from its source URL
    blob_service = BlockBlobService(account_name=StoAcc_var_name, account_key=StoAcc_var_key)
    blob_service.copy_blob(destination_container, destination_object, source_url)

block_blob_service = BlockBlobService(account_name=StoAcc_var_name, account_key=StoAcc_var_key)

def upload_func(container, blobname, filename):
    # Upload a local file to the container and print how long the call took
    start = time.clock()
    block_blob_service.create_blob_from_path(
        container,
        blobname,
        filename)
    elapsed = time.clock() - start
    print "*** DEBUG *** Time spent uploading API ", filename, " is : ", elapsed, " in Bucket/container : ", container

#URL_TARGET = "https://mouradcloud.westeurope.cloudapp.azure.com/blog/blog/category/food/"
URL_TARGET = "https://www.cdiscount.com/search/10/telephone.html"
base_url = URL_TARGET
out_folder = '/tmp'
r = requests.get(URL_TARGET)
data = r.text
soup = BeautifulSoup(data, "lxml")


for link in soup.find_all('img'):
    image_url = link.get("src")
    if image_url is None:
        continue
    if 'http' not in image_url:
        print " ->>>>>>>>>>>>>> THIS IS A LOCAL IMAGE ... SKIPPING "
        continue
    if not image_url.endswith(('.png', '.jpg', '.jpeg')):
        print " ->>>>>>>>>>>>>> THIS NOT AN IMAGE ... SKIPPING "
        continue
    print " ->>>>>>>>>>>>>> THIS IS AN IMAGE ... PROCESSING "
    # Keep a local copy, then copy the original URL straight into the container
    file_name_downloaded = basename(image_url)
    file_name_path_local = STORING_DIRECTORY + "/" + file_name_downloaded
    with open(file_name_path_local, "wb") as f:
        f.write(requests.get(image_url).content)
    filename_in_clouddir = "uploads" + "/" + file_name_downloaded
    #upload_func(StoAcc_var_container, filename_in_clouddir, file_name_path_local)
    copy_azure_files(image_url, filename_in_clouddir, StoAcc_var_container)

The credentials file looks like this:

{
  "DC_LOCATION": "XXXXXXXXXXXXXXXXXXXXX",
  "GROUP_NAME": "XXXXXXXXXXXXXXXXXXXXX",
  "storagacc": {
    "Accountname": "XXXXXXXXXXXXXXXXXXXXX",
    "AccountKey": "XXXXXXXXXXXXXXXXXXXXX",
    "Container": "XXXXXXXXXXXXXXXXXXXXX",
    "AZURE_CLIENT_SECRET": "XXXXXXXXXXXXXXXXXXXXX=",
    "AZURE_SUBSCRIPTION_ID": "XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX"
  }
}

Application modernization: marketing event with a blob storage static page

In this article, we are going to discuss how to use blob storage as a cheap static front end while keeping security and integrity.

First of all, you have your customer database and an application that can query the DB and receive requests to update it.

If you do not have your own application, you can use mine, which is located here: https://github.com/MourIdri/flaskgattling .

1 – Create your own database and add your credential details in the credentials.json file.

2 – Create a virtual machine and start the script: sh_bootstrap_gattling_server_ub16.sh

This script will prepare the VM for you and start the application, which will then be available at: http://<YourPublicIP>:877/

You can reuse the script in your Dockerfile if you want to make it even cheaper!

In my case, I use a container, which is easy to deploy, and my application is already containerized!

Now that the application is running, you can try the client: sh_bootstrap_gattling_client_ub16.sh. This client is a Gatling load test that will measure the performance of your server and provide indicators such as the type of request (read or write to the DB), the time it took and the status code the server returned.

Now that every element of the backend is working, let's build the front end.

There is no need to create a dedicated front end server for a marketing event; blob storage is enough!

You just need to create a container on Azure blob storage and put the pages involved in it. Make sure you set the access policy to read only. You can copy-paste the content of https://github.com/MourIdri/flaskgattling/tree/master/frontmarket

Then adjust the policies using Storage Explorer or the Azure portal.
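
If you prefer scripting this step instead of using the portal or Storage Explorer, here is a small sketch with the legacy azure-storage Python SDK (the account name, key and container name are placeholders to adapt):

from azure.storage.blob import BlockBlobService
from azure.storage.blob.models import PublicAccess

svc = BlockBlobService(account_name="<YourAccountName>", account_key="<YourAccountKey>")
# Create the container and allow anonymous read access on blobs only (the container itself cannot be listed)
svc.create_container("frontmarket", public_access=PublicAccess.Blob)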

The demo is running at https://rgcloudmouradgeneraleuro.blob.core.windows.net/mouraddevcont/frontmarket/index.html

So the workflow is now the following:

1 – The user reaches the first blob

The HTML, the pictures and the CSS are stored in the container:

2 – The user registers and the application does the following

The page can contain a JS or PHP script to perform operations like a POST, for example. In my case this is a very small and simple use case where it does a POST to the Flask application server:

3 – The database is updated and the Flask API uses the redirect feature to send the user to the last blob (a minimal sketch follows below).

Optionally, store a backup in the blob archive tier with a timestamp or a snapshot.
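
To illustrate step 3, here is a minimal Flask sketch of the redirect part (the endpoint name, form fields and confirmation URL are assumptions; the real code lives in the repository linked above):

from flask import Flask, request, redirect

app = Flask(__name__)

# Assumed URL of the confirmation page hosted on blob storage
CONFIRMATION_PAGE = "https://<account>.blob.core.windows.net/<container>/frontmarket/confirmation.html"

@app.route("/register", methods=["POST"])
def register():
    name = request.form.get("name", "")
    email = request.form.get("email", "")
    # ... insert (name, email) into the customer database here (see the backend repository) ...
    # Once the DB is updated, send the browser back to a static page on blob storage
    return redirect(CONFIRMATION_PAGE, code=302)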

Build an Azure HUB and SPOKE using a PFSENSE NVA, UDR, Vnet Peering and a VPN on the local router

In this thread we are going to discuss how to set up a hub and spoke architecture on Azure using a VPN connection, a PFsense router and other tools like Azure routes. Here is the target topology:

In a previous article, we saw how to create a VPN session (here).

We will review part of it here…

1 – 1 First let's build the shared environment on Azure (RG, Vnet and storage for logs, etc.):

#HUB_SHARED_CORE_INFRA
az group create --resource-group RG_CORE_INFRA --location westeurope
az network vnet create --resource-group RG_CORE_INFRA --location westeurope --name vnet_core --address-prefix 10.1.1.0/24
az storage account create --location westeurope --name moustorcoreinfra --resource-group RG_CORE_INFRA --sku Standard_LRS

1 – 2 Let’s split the Vnet :

#HUB_SHARED_CORE_INFRA
az network vnet subnet create --address-prefix 10.1.1.0/28 --name default --resource-group RG_CORE_INFRA --vnet-name vnet_core 
az network vnet subnet create --address-prefix 10.1.1.16/28 --name GatewaySubnet --resource-group RG_CORE_INFRA --vnet-name vnet_core 
az network vnet subnet create --address-prefix 10.1.1.32/28 --name subnet_DMZ --resource-group RG_CORE_INFRA --vnet-name vnet_core 
az network vnet subnet create --address-prefix 10.1.1.48/28 --name subnet_core --resource-group RG_CORE_INFRA --vnet-name vnet_core

1 – 3 Build the NVA using the PFSENSE image I created in the last part of this thread (the default login/password is admin/pfsense):

AdminPassword="M@nP@ssw@rd!"
az network nic create --resource-group RG_CORE_INFRA --name pf-sense-nva-2-nic-2 --vnet-name vnet_core --subnet subnet_DMZ --ip-forwarding true --private-ip-address 10.1.1.37
az network nic create --resource-group RG_CORE_INFRA --name pf-sense-nva-2-nic-1 --vnet-name vnet_core --subnet subnet_core --ip-forwarding true --private-ip-address 10.1.1.53
az vm create --resource-group RG_CORE_INFRA --name pf-sense-nva-2 --admin-password $AdminPassword --admin-username demo --nics pf-sense-nva-2-nic-1 pf-sense-nva-2-nic-2 --image PFSENSE-IMAGE --size Standard_DS2_v2 --os-disk-size-gb 32 --no-wait
az vm boot-diagnostics enable --resource-group RG_CORE_INFRA --name pf-sense-nva-2 --storage https://moustorcoreinfra.blob.core.windows.net 

1 – 4 Create a VM in the core area for configuration and maintenance:

AdminPassword="M@nP@ssw@rd!"
az network nic create --resource-group RG_CORE_INFRA --name win2k-16-maint-nic-1 --vnet-name vnet_core --subnet subnet_core --ip-forwarding true --private-ip-address 10.1.1.52
az vm create --resource-group RG_CORE_INFRA --name win2k-16-maint --image win2016datacenter --admin-username demo --admin-password $AdminPassword --nics win2k-16-maint-nic-1 --size Standard_DS2_v2 
az vm boot-diagnostics enable --resource-group RG_CORE_INFRA --name win2k-16-maint --storage https://moustorcoreinfra.blob.core.windows.net

1 – 5 Create the VPN gateway and the tunnel (it takes around 20 minutes in the background):

#CREATE VPN GATEWAY AND VPN CONNECTION
az network public-ip create --resource-group RG_CORE_INFRA --name PuIPVPNGW --dns-name publicipvnpconnexionaddress --allocation-method Dynamic
az network local-gateway create --gateway-ip-address 88.88.88.88 --name Site2 --resource-group RG_CORE_INFRA --local-address-prefixes 192.168.0.0/16
az network vnet-gateway create --resource-group RG_CORE_INFRA --name SPOCKVPNGW --public-ip-address PuIPVPNGW --vnet vnet_core --gateway-type Vpn --vpn-type RouteBased --sku Basic --no-wait
az network vpn-connection create --resource-group RG_CORE_INFRA --name hometoAzure --vnet-gateway1 SPOCKVPNGW -l westeurope --shared-key PasswordToBeChanged --local-gateway2 Site2

1 – 6 On the on-prem router, create the VPN too (this works for Ubiquiti and, generally speaking, for any strongSwan-based VPN such as VyOS):

configure
set vpn ipsec auto-firewall-nat-exclude enable
set vpn ipsec ike-group FOO0 key-exchange ikev2
set vpn ipsec ike-group FOO0 lifetime 28800
set vpn ipsec ike-group FOO0 proposal 1 dh-group 2
set vpn ipsec ike-group FOO0 proposal 1 encryption aes256
set vpn ipsec ike-group FOO0 proposal 1 hash sha1
set vpn ipsec esp-group FOO0 lifetime 27000
set vpn ipsec esp-group FOO0 pfs disable
set vpn ipsec esp-group FOO0 proposal 1 encryption aes256
set vpn ipsec esp-group FOO0 proposal 1 hash sha1
set vpn ipsec site-to-site peer 7.7.7.7 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 7.7.7.7 authentication pre-shared-secret <PASSW>
set vpn ipsec site-to-site peer 7.7.7.7 connection-type respond
set vpn ipsec site-to-site peer 7.7.7.7 description ipsec
set vpn ipsec site-to-site peer 7.7.7.7 local-address 192.168.1.1
set vpn ipsec site-to-site peer 7.7.7.7 ike-group FOO0
set vpn ipsec site-to-site peer 7.7.7.7 vti bind vti0
set vpn ipsec site-to-site peer 7.7.7.7 vti esp-group FOO0
set interfaces vti vti0
set firewall options mss-clamp interface-type vti
set firewall options mss-clamp mss 1350
set protocols static interface-route 10.1.0.0/16 next-hop-interface vti0
commit ; save

2 – 1 Create the shared services for the Production spoke:

#SPOKE_PROD
az group create --resource-group SPOKE_PROD --location westeurope
az network vnet create --resource-group SPOKE_PROD --location westeurope --name vnet-spoke-prod --address-prefix 10.1.2.0/24
az storage account create --location westeurope --name moustorprod --resource-group RG_CORE_INFRA --sku Standard_LRS

2 – 2 Create the Production environment (let's assume a basic Apache server for now): we create an NSG (optional), a load balancer, a Vnet subnet and 2 load-balanced VMs.

#PROD_FRONT
#CREATE_NSG_OPTIONAL_SINCE_FIREWALLING_CAN_BE_MANAGED_FROM_NVA
az network nsg create --resource-group SPOKE_PROD --name NGS-generic-linux-N-tier-1
az network nsg rule create --resource-group SPOKE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-22_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 22 --access allow --priority 1000
az network nsg rule create --resource-group SPOKE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-80_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 80 --access allow --priority 1001
az network nsg rule create --resource-group SPOKE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-8080_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 8080 --access allow --priority 1002
az network nsg rule create --resource-group SPOKE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-3306_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 3306 --access allow --priority 1003
#CREATE_SUBNET_AND_LOAD_BALANCER
az network vnet subnet create --address-prefix 10.1.2.0/28 --name vnet-spoke-prod-subnet-tier-1 --resource-group SPOKE_PROD --vnet-name vnet-spoke-prod --network-security-group NGS-generic-linux-N-tier-1
az network lb create --resource-group SPOKE_PROD --name load-balancer-front-end-web --private-ip-address 10.1.2.4 --subnet vnet-spoke-prod-subnet-tier-1 --vnet-name vnet-spoke-prod --backend-pool-name demo-front-from-tier-1-to-backend-pool
az network lb probe create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name health-prob-1-22 --protocol tcp --port 22
az network lb probe create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name health-prob-1-80 --protocol tcp --port 80
az network lb probe create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name health-prob-1-8080 --protocol tcp --port 8080
az network lb probe create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name health-prob-1-3306 --protocol tcp --port 3306
az network lb rule create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name load-balancer-rule-1-22 --protocol tcp --frontend-port 22 --backend-port 22 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-22
az network lb rule create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name load-balancer-rule-1-80 --protocol tcp --frontend-port 80 --backend-port 80 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-80
az network lb rule create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name load-balancer-rule-1-8080 --protocol tcp --frontend-port 8080 --backend-port 8080 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-8080
az network lb rule create --resource-group SPOKE_PROD --lb-name load-balancer-front-end-web --name load-balancer-rule-1-3306 --protocol tcp --frontend-port 3306 --backend-port 3306 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-3306
#CREATE_AVAILABILITY_SET
az vm availability-set create --resource-group SPOKE_PROD --name Availability-Set-1 --platform-fault-domain-count 2 --platform-update-domain-count 2
#CREATE_2_VMs_LOAD_BALANCED
az network nic create --resource-group SPOKE_PROD --name ub-16-tier-1-vm-1-nic-1 --vnet-name vnet-spoke-prod --subnet vnet-spoke-prod-subnet-tier-1 --private-ip-address 10.1.2.5 --lb-name load-balancer-front-end-web --lb-address-pools demo-front-from-tier-1-to-backend-pool
az vm create --resource-group SPOKE_PROD --name ub-16-tier-1-vm-1 --admin-password $AdminPassword --admin-username demo --availability-set Availability-Set-1 --nics ub-16-tier-1-vm-1-nic-1 --image UbuntuLTS --size Standard_DS1_v2 --os-disk-size-gb 32 
az vm boot-diagnostics enable --resource-group SPOKE_PROD --name ub-16-tier-1-vm-1 --storage https://moustorprod.blob.core.windows.net
az network nic create --resource-group SPOKE_PROD --name ub-16-tier-1-vm-2-nic-1 --vnet-name vnet-spoke-prod --subnet vnet-spoke-prod-subnet-tier-1 --private-ip-address 10.1.2.6 --lb-name load-balancer-front-end-web --lb-address-pools demo-front-from-tier-1-to-backend-pool
az vm create --resource-group SPOKE_PROD --name ub-16-tier-1-vm-2 --admin-password $AdminPassword --admin-username demo --availability-set Availability-Set-1 --nics ub-16-tier-1-vm-2-nic-1 --image UbuntuLTS --size Standard_DS1_v2 --os-disk-size-gb 32

3 – 1 Let's build the Pre-Production spoke:

#SPOKE_PRE_PROD
az group create --resource-group SPOKE_PRE_PROD --location westeurope
az network vnet create --resource-group SPOKE_PRE_PROD --location westeurope --name vnet-spoke-pre-prod --address-prefix 10.1.3.0/24
az storage account create --location westeurope --name moustorpreprod --resource-group RG_CORE_INFRA --sku Standard_LRS

3 – 2 Create the Pre-Production environment (let's assume a basic Apache server for now): we create an NSG (optional), a load balancer, a Vnet subnet and one load-balanced VM.

#PRE_PROD_FRONT
#CREATE_NSG_OPTIONAL_SINCE_FIREWALLING_CAN_BE_MANAGED_FROM_NVA
az network nsg create --resource-group SPOKE_PRE_PROD --name NGS-generic-linux-N-tier-1
az network nsg rule create --resource-group SPOKE_PRE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-22_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 22 --access allow --priority 1000
az network nsg rule create --resource-group SPOKE_PRE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-80_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 80 --access allow --priority 1001
az network nsg rule create --resource-group SPOKE_PRE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-8080_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 8080 --access allow --priority 1002
az network nsg rule create --resource-group SPOKE_PRE_PROD --nsg-name NGS-generic-linux-N-tier-1 --name NGS-generic-linux-N-tier-1-rule-3306_inbound --protocol tcp --direction inbound --source-address-prefix '*' --source-port-range '*' --destination-address-prefix '*' --destination-port-range 3306 --access allow --priority 1003
#CREATE_SUBNET_AND_LOAD_BALANCER
az network vnet subnet create --address-prefix 10.1.3.0/28 --name vnet-spoke-pre-prod-subnet-tier-1 --resource-group SPOKE_PRE_PROD --vnet-name vnet-spoke-pre-prod --network-security-group NGS-generic-linux-N-tier-1
az network lb create --resource-group SPOKE_PRE_PROD --name load-balancer-front-end-web-pre --private-ip-address 10.1.3.4 --subnet vnet-spoke-pre-prod-subnet-tier-1 --vnet-name vnet-spoke-pre-prod --backend-pool-name demo-front-from-tier-1-to-backend-pool
az network lb probe create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name health-prob-1-22 --protocol tcp --port 22
az network lb probe create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name health-prob-1-80 --protocol tcp --port 80
az network lb probe create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name health-prob-1-8080 --protocol tcp --port 8080
az network lb probe create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name health-prob-1-3306 --protocol tcp --port 3306
az network lb rule create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name load-balancer-rule-1-22 --protocol tcp --frontend-port 22 --backend-port 22 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-22
az network lb rule create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name load-balancer-rule-1-80 --protocol tcp --frontend-port 80 --backend-port 80 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-80
az network lb rule create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name load-balancer-rule-1-8080 --protocol tcp --frontend-port 8080 --backend-port 8080 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-8080
az network lb rule create --resource-group SPOKE_PRE_PROD --lb-name load-balancer-front-end-web-pre --name load-balancer-rule-1-3306 --protocol tcp --frontend-port 3306 --backend-port 3306 --backend-pool-name demo-front-from-tier-1-to-backend-pool --probe-name health-prob-1-3306
#CREATE_AVAILABILITY_SET
az vm availability-set create --resource-group SPOKE_PRE_PROD --name Availability-Set-1 --platform-fault-domain-count 2 --platform-update-domain-count 2
#CREATE_1_VMs_LOAD_BALANCED
az network nic create --resource-group SPOKE_PRE_PROD --name ub-16-pre-tier-1-vm-1-nic-1 --vnet-name vnet-spoke-pre-prod --subnet vnet-spoke-pre-prod-subnet-tier-1 --lb-name load-balancer-front-end-web-pre --lb-address-pools demo-front-from-tier-1-to-backend-pool
az vm create --resource-group SPOKE_PRE_PROD --name ub-16-pre-tier-1-vm-1 --admin-password $AdminPassword --admin-username demo --availability-set Availability-Set-1 --nics ub-16-pre-tier-1-vm-1-nic-1 --image UbuntuLTS --size Standard_DS1_v2 --os-disk-size-gb 32 --no-wait

4 – 1 Now create the Vnet Peering :

#VNET_PEERING_CORE_TO_PROD
# --remote-vnet-id expects a full resource ID when the vnets live in different resource groups, so resolve the IDs first
hubvnetid=$(az network vnet show -g RG_CORE_INFRA -n vnet_core --query id -o tsv)
prodvnetid=$(az network vnet show -g SPOKE_PROD -n vnet-spoke-prod --query id -o tsv)
preprodvnetid=$(az network vnet show -g SPOKE_PRE_PROD -n vnet-spoke-pre-prod --query id -o tsv)
az network vnet peering create -g RG_CORE_INFRA -n PEERING_HUB_TO_PROD --vnet-name vnet_core --remote-vnet-id $prodvnetid --allow-vnet-access
az network vnet peering create -g SPOKE_PROD -n PEERING_PROD_TO_HUB --vnet-name vnet-spoke-prod --remote-vnet-id $hubvnetid --allow-vnet-access
#VNET_PEERING_CORE_TO_PRE_PROD
az network vnet peering create -g RG_CORE_INFRA -n PEERING_HUB_TO_PRE_PROD --vnet-name vnet_core --remote-vnet-id $preprodvnetid --allow-vnet-access
az network vnet peering create -g SPOKE_PRE_PROD -n PEERING_PRE_PROD_TO_HUB --vnet-name vnet-spoke-pre-prod --remote-vnet-id $hubvnetid --allow-vnet-access

4 – 2 Create the UDR and Route tables :

#CREATE_ROUTE_TABLES
#CREATE_ROUTE_TABLE_HUB_TO_ONPREM
az network route-table create --resource-group RG_CORE_INFRA --name ROUTE_TABLE_HUB_TO_ONPREM 
# Create routes to force traffic from GatewaySubnet to go through 10.1.1.37 (the DMZ/WAN interface of the NVA)
az network route-table route create --name ROUTE_HUB_TO_ONPREM --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_HUB_TO_ONPREM --address-prefix 192.168.0.0/16 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.37
az network route-table route create --name ROUTE_HUB_TO_CORE --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_HUB_TO_ONPREM --address-prefix 10.1.1.48/28 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.37
az network route-table route create --name ROUTE_HUB_TO_PRE_PROD --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_HUB_TO_ONPREM --address-prefix 10.1.3.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.37
az network route-table route create --name ROUTE_HUB_TO_PROD --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_HUB_TO_ONPREM --address-prefix 10.1.2.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.37
# and update the subnet so traffic from GatewaySubnet transits through the virtual appliance IP 10.1.1.37 (which is in the DMZ) to reach 192.168.0.0/16 (this is redundant for now, but it is better if you use your own gateway)
az network vnet subnet update --vnet-name vnet_core --name GatewaySubnet --resource-group RG_CORE_INFRA --route-table ROUTE_TABLE_HUB_TO_ONPREM
#CREATE_ROUTE_TABLE_SPOKE_PRE_PROD
az network route-table create --resource-group RG_CORE_INFRA --name ROUTE_TABLE_SPOKE_PRE_PROD
# Create routes to force traffic from vnet-spoke-pre-prod-subnet-tier-1 to 192.168.0.0/16 and 10.1.2.0/24 through 10.1.1.53 (the LAN interface of the NVA)
az network route-table route create --name SPOKE_PRE_PROD_TO_ONPREM --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PRE_PROD --address-prefix 192.168.0.0/16 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
az network route-table route create --name SPOKE_PRE_PROD_TO_PROD --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PRE_PROD --address-prefix 10.1.2.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
az network route-table route create --name ROUTE_HUB_TO_DMZ --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PRE_PROD --address-prefix 10.1.1.32/28 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
az network route-table route create --name ROUTE_HUB_TO_GWVPNSUBNET --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PRE_PROD --address-prefix 10.1.1.16/28 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
# and update the subnet so traffic from vnet-spoke-pre-prod-subnet-tier-1 transits through the virtual appliance IP 10.1.1.53 to reach 192.168.0.0/16 and 10.1.2.0/24
az network vnet subnet update --vnet-name vnet-spoke-pre-prod --name vnet-spoke-pre-prod-subnet-tier-1 --resource-group SPOKE_PRE_PROD --route-table $(az network route-table show -g RG_CORE_INFRA -n ROUTE_TABLE_SPOKE_PRE_PROD --query id -o tsv)
#CREATE_ROUTE_TABLE_SPOKE_PROD
az network route-table create --resource-group RG_CORE_INFRA --name ROUTE_TABLE_SPOKE_PROD
# Create routes to force traffic from vnet-spoke-prod-subnet-tier-1 to 192.168.0.0/16 and 10.1.3.0/24 through 10.1.1.53 (the LAN interface of the NVA)
az network route-table route create --name ROUTE_SPOKE_PROD_TO_ONPREM --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PROD --address-prefix 192.168.0.0/16 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
az network route-table route create --name SPOKE_PROD_TO_PRE_PROD --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PROD --address-prefix 10.1.3.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
az network route-table route create --name ROUTE_HUB_TO_DMZ --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PROD --address-prefix 10.1.1.32/28 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
az network route-table route create --name ROUTE_HUB_TO_GWVPNSUBNET --resource-group RG_CORE_INFRA --route-table-name ROUTE_TABLE_SPOKE_PROD --address-prefix 10.1.1.16/28 --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.53
# and update the subnet so traffic from vnet-spoke-prod-subnet-tier-1 transits through the virtual appliance IP 10.1.1.53 to reach 192.168.0.0/16 and 10.1.3.0/24
az network vnet subnet update --vnet-name vnet-spoke-prod --name vnet-spoke-prod-subnet-tier-1 --resource-group SPOKE_PROD --route-table $(az network route-table show -g RG_CORE_INFRA -n ROUTE_TABLE_SPOKE_PROD --query id -o tsv)

Now the Azure part is done… We will make the network flow go through the DMZ to the LAN and add some control over it:

Now open a remote desktop session to 10.1.1.52, which should be the address of the maintenance server, and start configuring the pfSense appliance, which is reachable at 10.1.1.53, as follows:

Change the name of the interface (it is called WAN by default; call it DMZ).

Also allow LAN traffic on the same page:

Best troubleshooting tool ever… PING! So allow it (on both interfaces):

Create a route :

Apply changes

Add the gateway (10.1.1.33, the first usable address of the 10.1.1.32/28 subnet, which Azure reserves as the gateway)

Apply again …  :

Now modify the route to use the newly created gateway:

Enable Masquerade :

(Bad copy-paste…) Still not working, because port 80 is not allowed:

Allow port 80 on both interfaces (very, very bad copy-paste… sorry, it was late…)

Hurray!! From my home Raspberry Pi test, it works all the way through to the spoke!!

Customize VMs before using them in Azure – pfSense network appliance example

In this thread I am going to show and explain how to create a custom VHD and build a VM from it.

I am going to use the PFSENSE appliance, which is very useful for routing.

In this thread, I am only going to spin up a VM from a blob. Images will be discussed in another thread in the future.

  • PART 1 – creation of the VM :

Specify a subnet ( use the private default )

At this point, do not start the VM yet. We have to change the disk first and attach it to the VM:

A new disk has been created; you have to attach it to the VM as below (do not forget to click “Apply” before clicking “OK”):

Now select “Add new hardware”; we are going to add another NIC to have external access.

  • PART 2 – Starting and customizing the VM:

Just before the reboot prompt, eject the ISO disk from the task bar.

Do not forget to check this option, otherwise you are going to have some trouble once in Azure.

Now go to PuTTY and enter the following:

pkg upgrade -y
ln -sf /usr/local/bin/perl /usr/bin/perl
pkg install bash git -y
pkg search python
pkg search setuptools
pkg install python27-2.7.14_1 py27-setuptools-36.5.0
ln -sf /usr/local/bin/python2.7 /usr/local/bin/python
git clone https://github.com/Azure/WALinuxAgent.git
cd WALinuxAgent
python setup.py install
ln -sf /usr/local/sbin/waagent /usr/local/bin/waagent
waagent -version

like this :

Check that the agent is running:

Disable the unneeded features (for now):

disabling again…

Now your VM is almost ready; go back and select DHCP for IPv4 on the LAN interface (and None for IPv6):

Reboot the VM one last time :

Now shut down the VM and delete the snapshots:

  • PART 3 – Upload to Blob storage :

Connect to Azure using PowerShell in admin mode:

Connect-AzureRmAccount

EDIT: I forgot the storage account creation part, so I am adding it here:

Copy-paste the commands below (adapt them to fit your expectations and your names):

$ResourceGroupName = "RG_CORE_INFRA"
$pfresourcegroup = "RG_CORE_INFRA"
$StorageAccountName = "rgmouraddemov0"
$vnetname = "VNET_CORE_INFRA"
$location = "West Europe"

New-AzureRmStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -Location $location -SkuName "Standard_LRS" -Kind "Storage"

Now, optionally copy your disk somewhere else first to avoid any mishap… and use PowerShell to upload it to blob storage:

$rgName = "RG_CORE_INFRA"
$urlOfUploadedImageVhd = "https://rgmouraddemov0.blob.core.windows.net/vhd/pfsenseappliancevhd.vhd"
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $urlOfUploadedImageVhd  -LocalFilePath "C:\pfsense_appliance_demo_vhd.vhd"

You will get something like this (it took me a while, so you had better grab a cup of coffee…):

By the way, the VHD is available here if you do not want to go through the tutorial:

https://rgmouraddemov0.blob.core.windows.net/vhd/pfsenseappliancevhdv1.vhd
  • PART 4 – Build VM from blob :

Now go to PowerShell and execute the following commands.

This is going to create a VM with 2 NICs and 2 public IPs: one for management, and the other specific to the pfSense router, which will be discussed in a future thread.

$ResourceGroupName = "RG_CORE_INFRA"
$pfresourcegroup = "RG_CORE_INFRA"
$StorageAccountName = "rgmouraddemov0"
$vnetname = "VNET_CORE_INFRA"
$location = "West Europe"
$vnet = Get-AzureRmVirtualNetwork -Name $vnetname -ResourceGroupName $ResourceGroupName
$backendSubnet = Get-AzureRMVirtualNetworkSubnetConfig -Name default -VirtualNetwork $vnet
$vmName="ROUTER-PFSENSE"
$DiskNameOfVm="pfsenseos"
$vmSize="Standard_DS2_v2"
$vnet = Get-AzureRmVirtualNetwork -Name $vnetname -ResourceGroupName $ResourceGroupName
$pubipwan = New-AzureRmPublicIpAddress -Name "PFPubIP-WAN" -ResourceGroupName $pfresourcegroup -Location $location -AllocationMethod Dynamic
$pubiplanadmin = New-AzureRmPublicIpAddress -Name "PFPubIP-LAN-ADMIN" -ResourceGroupName $pfresourcegroup -Location $location -AllocationMethod Dynamic
$nic1 = New-AzureRmNetworkInterface -Name "PFIntNIC-LAN" -ResourceGroupName $pfresourcegroup -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pubiplanadmin.Id
$nic2 = New-AzureRmNetworkInterface -Name "PFIntNIC-WAN" -ResourceGroupName $pfresourcegroup -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pubipwan.Id
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $DiskNameOfVm -VhdUri https://rgmouraddemov0.blob.core.windows.net/vhd/pfsenseappliancevhdv1.vhd -CreateOption attach -Linux -Caching ReadWrite
##$vm = Set-AzureRmVMOSDisk -VM $vm -Name $DiskNameOfVm -VhdUri https://rgmouraddemov0.blob.core.windows.net/vhd/pfsense_appl_vhd.vhd -CreateOption attach -Linux -Caching ReadWrite
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic1.Id
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic2.Id
Set-AzureRmVMBootDiagnostics -VM $vm -Enable -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName
$vm.NetworkProfile.NetworkInterfaces.Item(0).Primary = $true
New-AzureRMVM -ResourceGroupName $pfresourcegroup -Location $location -VM $vm

This is a set of commands that I found very useful, originally created by a great fellow here, which I adapted to my needs for this example.

Now wait a little bit (5 minutes), then go to the portal to see the VM that has spun up:

And if I browse to the public IP I just got from the VM creation, I can see my pfSense appliance.

Next step: an NSG to filter traffic and, of course, static routes to organize my hub and spoke architecture. This will be discussed in a future thread.

  • PART 5 – Build Image from blob :

Now go to PowerShell and execute the following commands.

This is going to create an image that can be used to build VMs.

$location = "west europe"
$imageName = "PFSENSE-IMAGE"
$osVhdUri = "https://rgmouraddemov0.blob.core.windows.net/vhd/pfsenseappliancevhd1.vhd"
$imageConfig = New-AzureRmImageConfig -Location $location
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig -OsType linux -OsState Generalized -BlobUri $osVhdUri
$image = New-AzureRmImage -ImageName $imageName -ResourceGroupName $ResourceGroupName -Image $imageConfig
  • PART 6 – Build VM from Image :

This part is still to be completed. Using the portal is very friendly and easy; however, with PowerShell it gets somewhat difficult if you want to add 2 NICs. On the other hand, PowerShell is very easy if a single VM with standard parameters is all you need.

Enough with PowerShell 🙂 Let's use the CLI now:

Just adapt the commands below:

AdminPassword="Mypassword11!" 
az network nic create --resource-group RG_CORE_INFRA --name pf-sense-nva-1-nic-1 --vnet-name vnet_core --subnet subnet_DMZ --ip-forwarding true --private-ip-address 10.1.1.37
az network nic create --resource-group RG_CORE_INFRA --name pf-sense-nva-1-nic-2 --vnet-name vnet_core --subnet subnet_core --ip-forwarding true --private-ip-address 10.1.1.53
az vm create --resource-group RG_CORE_INFRA --name pf-sense-nva-1 --admin-password $AdminPassword --admin-username demo --nics pf-sense-nva-1-nic-2 pf-sense-nva-1-nic-1 --image PFSENSE-IMAGE --size Standard_DS2_v2 --os-disk-size-gb 32 
az vm boot-diagnostics enable --resource-group RG_CORE_INFRA --name pf-sense-nva-1 --storage https://rgmouraddemov0.blob.core.windows.net



PYTHON – Create files filled with random content

Hi,

Sometimes you need to generate files with random content for a given test.

Here is an example of code to do this. You specify a list of sizes (in MB) and the function creates those files and returns the list of file names:

import os
from decimal import Decimal

FILESIZE_LIST = [0.128, 0.256, 0.512, 1, 2, 4]  # sizes in MB

def create_files(FILESIZE):
    # Create one file per requested size, filled with random bytes,
    # and return the list of created file names.
    global listofObjectstobeanalyzed
    listofObjectstobeanalyzed = []
    for increment in FILESIZE:
        filename = "filesize_%sMB.txt" % (str(increment))
        print "*** DEBUG *** %s" % filename
        inc = Decimal(str(increment))
        with open(filename, 'wb') as f:
            # Write the file in 512-byte chunks of random data
            for i in range(int((inc * 2 ** 20) / 512)):
                f.write(os.urandom(512))
        listofObjectstobeanalyzed.append(filename)
        print "*** DEBUG *** ", filename, " ADDED to FILE LIST "
    print "*** DEBUG *** FINAL LIST : ", listofObjectstobeanalyzed
    return listofObjectstobeanalyzed


create_files(FILESIZE_LIST)

PYTHON & AZURE BLOB STORAGE – PART 1 – Install and first upload

Hi,

In this new thread, we are going to play with Azure blob storage from Python. The first part covers installing the Python SDK and uploading our first set of data with some metadata.

In order to install the Python SDK, you have to run the following command:

pip install azure

If you prefer to install it from GitHub instead, do the following:

git clone git://github.com/Azure/azure-sdk-for-python.git
cd azure-sdk-for-python
python setup.py install

Now that you have installed the requirements, it is time to get down to business.

Let us start by presenting how to structure your script to get things done.

1 – Import the correct libraries.

2 – Define credentials, artifacts and targets.

3 – Define your Python call.

4 – Start your function.

Let’s start with the first one.

Be sure to start your script with the following :

import os
import sys
import time 
import azure
from azure.storage.blob import BlockBlobService
from azure.storage.blob import ContentSettings

Optionally, you can add some others. For the next examples, we will be doing tests with threading, etc., so it might be worth adding these packages:

import traceback
import threading 
import json 
from decimal import *
from datetime import datetime

Now it is time to define your credentials:

Accountname="rgcloudmouradgeneralpurp"
AccountKey="***********************NjfXoQk3luKV/UhKm*****TTc6JgzHdi5mO/x2V*******=="

You can then add the artifacts: create a storage service object, define the target container and create some metadata (a collection of strings, which can be a small JSON document):

MyFIle_to_Upload="test.txt"
block_blob_service = BlockBlobService(account_name=Accountname, account_key=AccountKey)
container_source="mouradpubliccontainer"
metadata_loaded = {'Owner': 'foo', 'Dept': 'blah', 'Environement': 'Naboo','Customer': 'Jabbah','Project': 'StarWars'}

Create your Python call:

def upload_func(container,blobname,filename,MetaDataS):
   start = time.clock()
   block_blob_service.create_blob_from_path(
   container,
   blobname,
   filename,
   metadata=MetaDataS)
   elapsed = time.clock()
   elapsed = elapsed - start
   print "*** DEBUG *** Time spent uploading API " , filename , " is : " , elapsed , " in Bucket/container : " , container

In this call, we define the “container”, the “blobname” (the name the blob will have once stored in Azure), the “filename” (the local file to be uploaded) and the metadata.

Now we can just go ahead and call the function.

upload_func(container_source, MyFIle_to_Upload, MyFIle_to_Upload, metadata_loaded)

Here you go! Now check your container in the portal: your file has been uploaded into it!
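
As a quick check, here is a sketch using the same legacy azure-storage SDK as above: list the blobs in the container and read back the metadata we attached.

# List the blobs in the container and print the metadata attached to each one
generator = block_blob_service.list_blobs(container_source)
for blob in generator:
    meta = block_blob_service.get_blob_metadata(container_source, blob.name)
    print blob.name, " -> ", meta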

Part 2 – Lift and Shift – Rapid IaaS deployment using the CLI on Azure

In this topic, we will discuss how to easily deploy an infrastructure using the CLI (aka the command line interface). We will deploy a 3-tier app on Azure.

Let's imagine you are a company that regularly creates marketing events. The cool thing would be to have a template to deploy every time you need it, with just some small adaptations (changing the background picture of the web page, for example).

The development repositories of the app (front and back) are stored on GitHub here and here.

With Azure you can run the CLI on your machine or directly in the web browser (see below).

Click on the Bash icon next to the bell:

Then the CLI dashboard appears:

Or download and install the CLI on your laptop and test from there; this page explains how.

In this thread we will focus on the CLI from my machine.

Before doing the “copy-paste” part of the demo, let's discuss my sample scripts.

First go to GitHub and download my 2 bash scripts here.

In this repo, you will find 2 script files. Basically, you just copy-paste them into your CLI to get the 3-tier app running. However, due to network latency (on the CLI machine), I do not recommend copy-pasting an entire script at once; copy-paste it part by part instead.

IMPORTANT NOTES :

  1. In my scripts I often used my own details (like passwords and the name of the RG; you will also find my name in the scripts in a few places, so just do a Ctrl+F and replace it with yours).
  2. The scripts apply the commands serially; if you want to save some time, just open several windows and copy-paste the 3 tiers separately.
  3. For the front end:
    It is based on Ubuntu 14. The user/password is in the script.
    After each VM is deployed, a script file is applied to set up the server. The script file is downloaded from Azure, but if you want to use your own, just change the link https://rgcloudmouradgeneralpurp.blob.core.windows.net/exchangecontainermourad/sh_bootstrap_pu.sh to your own.
    Basically, the front end is a PHP page where a customer registers for an event.
  4. For the back end:
    There is a Flask API running as a Python app. This Python app takes the input from the front end and fills a DB (a minimal sketch of such an endpoint follows this list).
  5. DB server:
    It is a single MySQL instance. The script installs a MySQL server and loads a dump from Azure blob storage; you can dump this table later if you want.
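
To give an idea of what that back end looks like, here is a minimal sketch of such a Flask endpoint (pymysql is used as a stand-in MySQL client, and the host, credentials, database and table names are assumptions; the real code is in the repository linked above):

import pymysql
from flask import Flask, request, jsonify

app = Flask(__name__)

def get_db():
    # Assumed connection settings for the DB tier installed by the bootstrap script
    return pymysql.connect(host="<db-server-private-ip>", user="demo",
                           password="<password>", db="eventdb")

@app.route("/api/register", methods=["POST"])
def register():
    data = request.get_json(force=True)
    conn = get_db()
    with conn.cursor() as cur:
        cur.execute("INSERT INTO registrations (name, email) VALUES (%s, %s)",
                    (data.get("name"), data.get("email")))
    conn.commit()
    conn.close()
    return jsonify({"status": "recorded"}), 201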

Every time a resource is deployed, you will get a confirmation in JSON format:

It takes a while, especially the VM deployment and the extension script (the script that installs Apache and downloads from the Git repo). At the end of the 2 scripts, if you look at your resources in the Azure portal (adapt it if you want), you will have this infrastructure:

Now let’s try the app.

In my script I set the DNS name of the load balancer's public IP to “demofrontweb”; you have to add the region where the IP is located (eastus for me) and “cloudapp.azure.com”, the generic DNS suffix Azure provides: http://demofrontweb.eastus.cloudapp.azure.com/ .

I register and submit :

For the purpose of this demo, the only way to connect to the DB server is to SSH to a control VM that has public SSH access to the internet with only port 22 open.

Here are the connection, the DB selection and the table:

To confirm my request has been recorded, I go to the database server and search for my details:

Here is the table before registering my user:

And here it is after:

Protect your VMs against Bitcoin mining…

I was away from my VMs for a couple of hours over the weekend. When I came back, I found one of my VMs in the Azure portal consuming a ridiculous amount of bandwidth! See for yourself!

As you can see, it used 4 TB in less than 2 days!! Still, our friends slept a little bit…

After looking at netstat, I found an IP that was doing a lot of transactions and had quite a few sessions open. I copy-pasted this IP into a geolocation tool, and voilà!!

Here we are! A Chinese friend!

My first reaction was to deactivate the public IP from the Azure console, and network usage dropped; however, the CPU was still high!

So I had a look at the processes with the “top” command, and I found a strange process doing very fishy things. Its name was “.syslog”, like syslog, but it was actually something else… The real syslog was well behaved and the syslog files were in their place.

ls -l /proc/1351/exe
lrwxrwxrwx 1 root root 0 Mar 12 14:00 /proc/1351/exe -> /bin/.syslog

Bingo, syslog is normally not located there! So I did a kill -9 and the CPU went down for good.

After spending some minutes trying to find and decompile this .syslog, I gave up and just tried to start it again, and the CPU went crazy again:

But now the process was different. The file was still hidden with a dot at the beginning of its name, but its name was now a random set of characters…

After googling a little bit, I found that my VM was actually mining Bitcoin!!

I ended up deleting this test VM…

=> Next time, I will activate Azure security features with agent logging !!

“Infra As Code” with Open Source LAMP App on Azure

Using the N-tier approach, I created a script that builds an N-tier application in less than 20 minutes. No Microsoft legacy!!! Only open source.

The Github is here : https://github.com/MourIdri/HYBRID

Download the script called “hybrid_deploy.py” and start it on your favorite client.

(I use Windows 10 with embedded Ubuntu…)

The front end is a PHP-based application. The back end is a Python app written with the Flask framework and RESTful capabilities. And the database is a MySQL DB.

It should look like this ( first page ) :

Second Page :

Validation with API Call

As you can see, Son Goku is now registered for the event.
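
For reference, such a validation call can be made with the requests library (the URL and endpoint below are assumptions; check the repository for the real route):

import requests

# Ask the back end API for the recorded registrations to confirm the new entry
resp = requests.get("http://<backend-host>:5000/api/registrations")
print resp.status_code, resp.json()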