Best entry video: Functional Programming in 40 Minutes • Russ Olsen • GOTO 2018
You can find the slides here.
Mindset

We don't need to put our existing knowledge aside when learning functional programming. It's more of a refactoring.

Functions in mathematics

A mathematical function is just a map from one set to another. Functions don't compute anything in math. A function just is. A thing. Once a function is defined, its input-output correspondence is never changed by other factors, which means there are no side effects.
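To make that contrast concrete, here is a small illustrative Python sketch (the function names are my own, not from the talk): a pure function in the mathematical sense next to an impure one whose result depends on external state.

# Pure: the result depends only on the inputs; calling it has no side effects.
def add(a, b):
    return a + b

# Impure: the result depends on (and mutates) external state,
# so the same call can return different values over time.
counter = 0

def increment():
    global counter
    counter += 1
    return counter

add(2, 3)    # always 5
increment()  # 1
increment()  # 2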
Let's grasp the concept of decorators in Python (PEP 318) with short snippets.
Step 1: Function as a parameter

def f1():
    print("Called f1.")

def f2(f):
    f()

f2(f1)  # Called f1

f2 takes a function (object) as a parameter.
Step 2: Wrapping function

def f1(fun):
    def wrap():
        print("Start wrap")
        fun()
        print("End wrap")
    return wrap

def f():
    print("In function f")

f1(f)()

### python test.py
# Start wrap
# In function f
# End wrap

Step 3: Use a decorator

def f1(fun):
    def wrap():
        print("Start wrap")
        fun()
        print("End wrap")
    return wrap

@f1
def f():
    print("In function f")

f()

### python test.py
# Start wrap
# In function f
# End wrap
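As a side note (my addition, not from the original snippets): the @f1 syntax is just sugar for reassigning the name, and in practice decorators usually apply functools.wraps so the wrapped function keeps its metadata. A minimal sketch:

import functools

def f1(fun):
    @functools.wraps(fun)  # preserve fun.__name__ and fun.__doc__
    def wrap():
        print("Start wrap")
        fun()
        print("End wrap")
    return wrap

def f():
    print("In function f")

f = f1(f)  # exactly what @f1 does
f()
print(f.__name__)  # 'f', thanks to functools.wraps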
I tried the Serverless Framework with Python + AWS Lambda.
Prerequisites

Node.js and npm:
$ node -v
v10.19.0
$ npm -v
7.5.2

Install

I created my Serverless account with Google SSO:
sudo npm install -g serverless

Hello world

Create a project:
$ serverless
Serverless: No project detected. Do you want to create a new one? Yes
Serverless: What do you want to make? AWS Python
Serverless: What do you want to call this project?
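For reference, the generated AWS Python template looks roughly like the sketch below (the service and function names are placeholders of mine, and the runtime version may differ depending on the template version):

serverless.yml:

service: hello-world
provider:
  name: aws
  runtime: python3.8
functions:
  hello:
    handler: handler.hello

handler.py:

import json

def hello(event, context):
    # Minimal Lambda handler returning an HTTP-style response
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello world"}),
    }

You can then push it to AWS with:

serverless deploy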
Original source

I followed the link below.
https://hatemtayeb2.medium.com/hello-graphql-a-practical-guide-a2f7f9f70ab4
Install the library

pip install -U graphene

Use it

schema.py
import graphene
import json

class Query(graphene.ObjectType):
    hello = graphene.String()

    def resolve_hello(self, info):
        return "world"

schema = graphene.Schema(query=Query)
result = schema.execute(
    '''
    {
      hello
    }
    '''
)
data = dict(result.data.items())
print(json.dumps(data, indent=2))

Try the code.
$ python schema.py
{
  "hello": "world"
}

GraphQL concepts

Every GraphQL implementation needs a Query class that contains some fields, and every field (with a few exceptions) must have a resolver function that returns the data for that field.
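To illustrate the field/resolver pairing a little further, here is a sketch based on graphene's documented argument support (the field and argument names are my own): a field that takes an argument, which graphene passes to the resolver as a keyword parameter.

import graphene

class Query(graphene.ObjectType):
    # A field with an argument; graphene passes it to the resolver.
    greet = graphene.String(name=graphene.String(default_value="world"))

    def resolve_greet(self, info, name):
        return f"Hello, {name}!"

schema = graphene.Schema(query=Query)
result = schema.execute('{ greet(name: "GraphQL") }')
print(result.data)  # {'greet': 'Hello, GraphQL!'}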
We can install TensorFlow easily via pip, but we need to take a little more care if we want to enable GPU support.
Requirements https://www.tensorflow.org/install/gpu#software_requirements
Here is how I installed my NVIDIA GPU environment.
Install

Prerequisites

sudo apt-get install libcupti-dev  # already installed in my case
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

Install cuDNN

Download a compatible version from https://developer.nvidia.com/rdp/cudnn-download.

tar -xzvf cudnn-10.2-linux-x64-v8.0.1.13.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
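After the install, a quick way to confirm that TensorFlow actually sees the GPU (assuming TensorFlow 2.x; the 1.x API differs):

import tensorflow as tf

# Lists detected GPUs; an empty list means the CUDA/cuDNN setup is not visible to TF.
print(tf.config.list_physical_devices('GPU'))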
TensorBoard

We can easily visualize neural networks written in TensorFlow as a graph with TensorBoard (and it can actually do more than that).
https://www.tensorflow.org/tensorboard/get_started
Install

As of 2020/07/09, TensorBoard is installed when you install TensorFlow with pip.

pip install -U tensorboard  # don't run this: TensorBoard is already installed with TensorFlow, and a second copy can conflict and cause problems
Simple sample code

First, create a simple model.
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
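My note is truncated here, so the following is a minimal sketch of how the TensorBoard hookup typically continues (the model layout follows the official get_started guide; the log directory name is my choice):

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# The TensorBoard callback writes logs that the TensorBoard UI reads.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/fit")
model.fit(x_train, y_train, epochs=5,
          validation_data=(x_test, y_test),
          callbacks=[tb_callback])

Then start the UI and open http://localhost:6006:

tensorboard --logdir logs/fit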
Official https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/extend/architecture.md
TensorFlow has client, master, and worker components. If that makes you imagine a distributed system, you are correct: TensorFlow is designed to run as a cluster.
Distributed TensorFlow

And here is the official document about distributed TensorFlow, with sample code.
https://github.com/tensorflow/examples/blob/master/community/en/docs/deploy/distributed.md
Deprecated: the link has expired.

Another sample

Here is sample cluster code by IONOS (one of the biggest German ISPs):
https://www.ionos.de/community/server-cloud-infrastructure/tensorflow/einrichten-eines-verteilten-tensorflow-clusters-auf-cloud-servern/
You can see there are parameter servers and worker servers.
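For flavor, here is a minimal sketch of that parameter-server/worker split using the TF 1.x API these articles are based on (the host names and task index are placeholders of mine):

import tensorflow as tf  # TF 1.x API (tf.compat.v1 in TF 2.x)

# Describe the cluster: parameter servers hold variables, workers compute.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process in the cluster starts a server for its own job and task.
server = tf.train.Server(cluster, job_name="worker", task_index=0)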
Intro - Official quickstart for beginners https://www.tensorflow.org/tutorials/quickstart/beginner
Import the TensorFlow library and load the official MNIST dataset.
import tensorflow as tf

mnist = tf.keras.datasets.mnist

Split the MNIST dataset into training and test sets, and normalize the values (from 0 to 1).
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

The meaning of the values is quoted below.
https://conx.readthedocs.io/en/latest/MNIST.html
The MNIST digits are grayscale images, with each pixel represented as a single intensity value in the range 0 (black) to 1 (white).
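The quickstart then builds, trains, and evaluates a small model; my note stops before that, so the following is a sketch paraphrased from the official tutorial (layer sizes as on that page):

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)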
Basic information about a dataframe

df.info()      # basic information about the dataframe
len(df.index)  # return the number of rows (data)
df.count()     # return the number of non-NaN values in each column
df.head()
df.tail()

Count the data in a column

In this example, the column is "Product".
df["Product"].value_counts() unique values to series.
df["Product"].unique() # the type numpy.ndarray check distrivution in graph # Check the data distribution # The column is Score ax = df["Score"]value_counts().plot(kind='bar') fig = ax.
I needed to automate AWS Route 53 operations with Ansible; here is a note. (As is always the case with Ansible, the most useful information is in the official documentation.)
Set up the environment

Install boto

According to the official Ansible documentation, we need to install boto (the AWS SDK for Python).
pip install -U boto

Get AWS API keys and export them

boto uses two keys under the hood in order to call the AWS API.
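The two keys are the standard AWS credentials, which boto picks up from environment variables (the values below are placeholders):

export AWS_ACCESS_KEY_ID='AK...'
export AWS_SECRET_ACCESS_KEY='...'

With that in place, a Route 53 change is a short task. This is a hedged sketch using the route53 module as documented by Ansible; the zone, record, and IP are made up:

- name: Create an A record for www
  route53:
    state: present
    zone: example.com
    record: www.example.com
    type: A
    ttl: 300
    value: 203.0.113.10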