Python: Shallow Copy vs Deep Copy

Here is a short article on the difference between a shallow and a deep copy in Python. So, let's get started.

In Python, when we assign an object like a list, tuple or dict to another name with the '=' sign, Python does not create a new object at all; it simply creates another reference to the same object. For example, let's say we have a list of lists like this:

list1 = [['a', 'b', 'c'], ['d', 'e', 'f']]

and we assign it to another name like:
list2 = list1

then if we print list2 in the Python terminal we'll get this:
list2 = [['a', 'b', 'c'], ['d', 'e', 'f']]

Both list1 and list2 point to the same memory location, so any change made through one name is visible through the other.
If we change list1 like this:
list1[0][0] = 'x'
list1.append(['g'])

then both list1 and list2 will be :

list1 = [['x', 'b', 'c'], ['d', 'e', 'f'], ['g']]
list2 = [['x', 'b', 'c'], ['d', 'e', 'f'], ['g']]
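
A quick way to confirm this is to compare object identities; a minimal sketch:

list1 = [['a', 'b', 'c'], ['d', 'e', 'f']]
list2 = list1
print(list1 is list2)             # True : both names refer to the same object
print(id(list1) == id(list2))     # True : same memory location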

Now coming to shallow copy: when an object is copied via a shallow copy, a new parent object is created, but its child objects still refer to the same memory locations as the original's children. Changes to the parent objects themselves (such as appending a new element) stay independent, while changes made inside a shared child are visible through both.
Let's understand this with a small example. Suppose we have this code snippet:

import copy

list1 = [['a', 'b', 'c'], ['d', 'e', 'f']]   # assigning a list
list2 = copy.copy(list1)                     # shallow copy is done using the copy() function of the copy module

list1.append(['g', 'h', 'i'])                # appending another list to list1

print(list1)   # [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
print(list2)   # [['a', 'b', 'c'], ['d', 'e', 'f']]
Notice that list2 remains unaffected. But if we make a change inside a child object like:

list1[0][0] = 'x'

then the change is visible through both list1 and list2:
list1 = [['x', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
list2 = [['x', 'b', 'c'], ['d', 'e', 'f']]
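
As a side note, a shallow copy of a list can also be made with the list() constructor or with slicing; the child objects are shared in exactly the same way:

list3 = list(list1)    # shallow copy via the list() constructor
list4 = list1[:]       # shallow copy via slicing
print(list3[0] is list1[0])   # True : the inner lists are still shared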

Now, a deep copy creates completely isolated objects. If an object is copied via deep copy, then both the parent and all of its children point to different memory locations from the original.
Example:

import copy

list1 = [['a', 'b', 'c'], ['d', 'e', 'f']]   # assigning a list
list2 = copy.deepcopy(list1)                 # deep copy is done using the deepcopy() function of the copy module

list1.append(['g', 'h', 'i'])                # appending another list to list1

print(list1)   # [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
print(list2)   # [['a', 'b', 'c'], ['d', 'e', 'f']]
Notice that list2 remains unaffected. But even if we make a change inside a child object like:

list1[0][0] = 'x'

list2 still remains unaffected, as the parent object and all the child objects point to different memory locations:
list1 = [['x', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
list2 = [['a', 'b', 'c'], ['d', 'e', 'f']]
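
To verify the difference between the two kinds of copy, you can compare the identity of the first child after each; a small sketch:

import copy

original = [['a', 'b', 'c'], ['d', 'e', 'f']]
shallow = copy.copy(original)
deep = copy.deepcopy(original)

print(original[0] is shallow[0])   # True  : shallow copy shares the child lists
print(original[0] is deep[0])      # False : deep copy duplicates the children too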

So, this was a short article from my side to help you understand the difference between shallow and deep copy.
Hope it makes sense.

Happy Hacking 🙂


My talk at Fossasia Summit 2018


It was a great experience attending FOSSASIA Summit 2018 in Singapore.
FOSSASIA Summit link.
The best part was that I even gave a talk there titled “How GCompris is impacting school education”.
GCompris is one of the projects under KDE Edu; it is basically high-quality educational software mainly focused on small children between the ages of 2 and 10.
GCompris contains many activities, where each activity is a small game that helps children learn in a fun way through animation and graphics.

YouTube link for my talk there.
Link to slides.

It was a great experience representing KDE at such a nice event. I look forward to contributing to KDE again in the future.

Thanks KDE community for your support.

Attaching some photos from the event.

[Photo: SWAG!!]

[Photo: Life Long Learning Institute]

[Photo: Inside the conference hall]

~ the end ~

Building your own Amazon Alexa Skill !!


Wouldn't it be great if you could control things like your news feeds, score updates, etc. with your voice? Amazon Alexa can help you achieve that.
Alexa is an intelligent personal assistant developed by Amazon, first used in the Amazon Echo and Amazon Echo Dot devices developed by Amazon Lab126.

Just like many open source projects, Amazon also wants developers around the world to help innovate and build awesome Alexa skills. So, they have launched various programs (which are rewarding :P) through which developers can showcase their creativity & tech skills by developing Alexa skills.
Link to program.

Recently, I also developed a skill named “Crypto Updater” (link). The skill basically tells the user the latest prices of crypto coins like Bitcoin, Ripple, Litecoin, etc. in US dollars.

This blog is basically about how to create your own custom Alexa skill. So, let's get started.

Steps :-
1.) Make an Amazon developer account (link).
2.) Go to the developer console and select Alexa.
3.) A list of your Alexa skills will be shown here. Select “Add new skill”.
4.) Here, there will be 5 different sections:
–> Skill Information
–> Interaction Model
–> Test
–> Publishing Information
–> Privacy & Compliance

Skill Information : here, you have to fill in basic information like the name of the skill, the invocation name, the language, etc.

Interaction Model : this is the place where all the magic happens.
a.) Intents : they are like the names of the functions which will be called when you speak particular sentences to Alexa.
b.) Utterances : for each intent you have to define sample sentences or phrases which help Alexa understand which intent to call.
c.) Slots : they are like variables which get assigned the values specified by the user who is using the skill.

So, for my skill I created two intents :-
–> intro : it handles the introduction of my skill, like what it does, how to start, etc.
–> price_coin : this intent handles all the logic when the user asks about a coin price.

Now, after this I defined sample_utterances for each intent. For example, utterances for the “intro” intent look like :
–> open {invocation_name},
–> launch {invocation_name}.
Similarly, utterances for “price_coin” intent are like:
–> “tell me latest price of {coin}”
–> “{coin} price”
Here, {coin} can vary according to the coin name. So, here we use a slot, which is a set of pre-defined values that the variable (coin in this case) can take. I have defined the values of the {coin} slot as the names of 20 different crypto coins.

Finally, after defining these 3 properties, we have to write the logic of each intent. The logic lives in an AWS Lambda function, i.e. the code which is executed when a particular intent is called. First, you have to make an account on the Amazon AWS portal; there, in the Lambda section, create your own Lambda function using a pre-defined Alexa template, which writes most of the boilerplate code for you. You just have to focus on the main logic of each intent. Sample of my lambda function.
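
To give an idea of what such a handler looks like, here is a minimal sketch; the get_price() helper is a hypothetical stand-in for whatever price lookup the real skill performs, while the request and response structure follows the standard Alexa skill JSON format:

def get_price(coin):
    # Hypothetical placeholder: a real skill would call a crypto price API here.
    return 42.0

def build_response(text):
    # Wrap plain text in the standard Alexa skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome to Crypto Updater. Ask me for a coin price.")
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "intro":
            return build_response("Crypto Updater tells you the latest crypto coin prices in US dollars.")
        if intent["name"] == "price_coin":
            coin = intent["slots"]["coin"]["value"]
            return build_response("The latest price of {} is {} US dollars.".format(coin, get_price(coin)))
    return build_response("Sorry, I did not understand that.")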

Your Lambda function will have an ARN. In order to link your Alexa skill with your Lambda function, you have to insert this ARN in the “Configuration” section.
Note :- make sure you are in the US East (N. Virginia) or EU (Ireland) region while creating your Lambda function.

Test : You are almost done. Go to the “Test” section and try out your skill by typing different sample utterances for your intents and checking the responses.

Publishing Information : Finally, go to the “Publishing Information” section and fill out a description of your skill.

At the end, apply for certification. The Alexa team will test and verify your skill against their standards; if your skill is lacking somewhere, they will mail you the details within 1 or 2 days. Finally, if your skill passes, it will be available in the Amazon Alexa Skill Store and to Alexa users.

Hope you will also come up with a great Skill.
Happy Hacking 🙂

Some handy MongoDB query operators !

MongoDB is an open-source document database and a leading NoSQL database. It stores data in flexible, JSON-like documents, meaning fields can vary from document to document and the data structure can change over time. This flexibility is one of the main reasons it is so popular among developers nowadays.
Example schema which we will use for our examples. We will mainly deal with the find query in this section.
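
Since the queries below refer to documents by position and by their city values, here is a hypothetical sample collection consistent with them (the collection name and field values are assumptions), created with pymongo:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local MongoDB instance
collection = client["test_db"]["collection"]

# Four assumed documents: the third has no 'city' key, the fourth has city 'gurugram'.
collection.insert_many([
    {"name": "doc one", "city": "jaipur"},
    {"name": "doc two", "city": "delhi"},
    {"name": "doc three"},
    {"name": "doc four", "city": "gurugram"},
])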

So, in this blog I will highlight some of its find operators which I find quite useful.

1.) '$exists' :- Let's say in the above schema we have to search for docs in which a particular key does or does not exist. For example, if we want to identify which docs do not have the 'city' key, we can use this operator.
Query : db.collection.find( { 'city': { '$exists': false } } )
Desc : it will return the docs which do not have this key.
2.) '$in' :- The $in operator selects the documents where the value of a field equals any value in the specified array. Let's say we want to select all those docs where city is either jaipur or delhi. In that case we can use this operator this way :
Query : db.collection.find( { 'city': { '$in': ['jaipur', 'delhi'] } } )

3.) '$nin' :- $nin selects the documents where the field value is not in the specified array or the field does not exist. Let's say we want to select all those docs where city is neither jaipur nor delhi. This operator comes in handy in this case.
Query : db.collection.find( { 'city': { '$nin': ['jaipur', 'delhi'] } } )
Desc : it will return the third & fourth docs, as there is no key named city in the third doc and city is gurugram in the fourth doc.

Hope the above info helps you in your hacking.
Keep rocking 🙂

 

My First PyCon !!!

So, PyCon India 2017 took place in Delhi on 4-5 Nov 2017 at Shaheed Sukhdev College of Business Studies. I was one of the attendees there and the experience was awesome and mind-blowing. I had heard that your first PyCon is always special, and after experiencing it, yeah, it is completely true.

So, this time there were three keynote speakers, all of them doing great things in their respective domains. The first day went like this for me :-

 

DAY 1 :-

1.) On the morning of the first day, i.e. 4 Nov, the conference officially started with a keynote from Noufal Ibrahim. Noufal is an industry veteran with over 15 years of experience. He founded Hamon and worked for Cisco Systems, Synopsys, The Internet Archive and other organisations, mostly in infrastructure engineering positions. His talk mainly focused on the importance and need of mentoring in the open source world. Seriously, the talk was very good. And the best part was the oath that he asked everyone to take after the talk.

2.) There were many parallel talks taking place on different fields and topics; I decided to attend the talks related to Application Development and Architecture. So, the second talk I attended was “HTTP Bottom Up – Live!” by Anand Chitipothu. The talk was very informative. It focused on how web servers work, with examples like gunicorn, unicorn, etc.

3.) The next talk I attended was “Building Microservices With Firefly” by Nabarun Pal. Firefly is an open source micro-framework to deploy Python functions as web services. Firefly was created with the aim of simplifying the deployment of machine learning models as RESTful APIs. The talk was good and I found it very useful.

4.) The next event was a panel discussion on the topic “where Python fails?”. The panellists included Peter Wang, Noufal Ibrahim and Anand Chitipothu, and the discussion was really superb. Each panellist expressed their views on the pros & cons of this wonderful language. They compared Python with JavaScript, another one of the most popular languages in computer science. My perception and viewpoint changed drastically after attending this clash of wonderful minds.

5.) Lunch 🙂 It was delicious.

6.) Lunch was followed by lightning talks by different developers about the things they explored during their journey with this wonderful language. Each speaker was given 5 minutes to present their ideas.

7.) Next was a talk by Nicholas Romero regarding trending frameworks like Django, React, GraphQL and Relay.

8.) The final event was the keynote “Python Community Principles” by Elizabeth Ferrao. Elizabeth leads product management and developer advocacy for XapiX, a data transformation tool for APIs, and is the co-founder of Women Who Code NYC, an 8k+ community of developers. She was very energetic and charismatic. Her talk mainly focused on the importance of a good open source community and how to build one.

DAY 2 :-

1.) The day started with an awesome keynote talk by none other than Peter Wang. I had been waiting quite eagerly for this talk and couldn't afford to miss the words of this wonderful person. The talk focused on how and why Python is used in data science, and on the magical powers Python holds that make it the most used language in the data science field.

2.) Tea , I needed it desperately 😛

3.) Then I attended a talk titled “Spinning Local DNS Server Sourcing Responses Over HTTPS To Combat Man-In-The-Middle Attack” by Arnav Kumar, followed by another panel discussion on the topic “women in open source”.

4.) The closing ceremony took place.

So, this wraps up my first PyCon encounter; I hope to have more such wonderful experiences in the future.
Till then, happy hacking guys 🙂

P.S : attaching some pics below :-

ElasticSearch : Diving into Scroll API for handling huge data records !!

Elasticsearch is a real-time, distributed, open source full-text search and analytics engine which is mostly used because it retrieves data quickly from huge piles of records. It is basically a search engine based on Lucene.

So, let's say we make a search query in Elasticsearch like :-

POST /index_name/_search
{
  "query": {
    "match": {
      "field": "value_to_match"
    }
  }
}
and suppose we get more than 10,000 matching records. In that case Elasticsearch will throw an exception saying that the number of records is > 10,000 and will not give you the result, as Elasticsearch has an inbuilt limit of 10,000, since returning more results than that can take a lot of heap memory and can be dangerous.

So, Elasticsearch has a feature to overcome this issue, known as the Scroll API.
While a search request returns a single “page” of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor on a traditional database.

The steps can be broken down as follows :-
1.) First you have to define a batch size and create a scroll context. Here, batch refers to the number of documents Elasticsearch will return in each scroll API hit.
2.) Let's say your result has 10 records (just for example) and you want to implement the scroll API in this case with a batch size of 2 records.
3.) So, the first step is to create a scroll context like :-
[Image: search request that creates the scroll context]

Here “size” refers to the batch size and ‘1m’ refers to the context time. The scroll parameter (passed to the search request and to every scroll request) tells Elasticsearch how long it should keep the search context alive. Its value does not need to be long enough to process all the data; it just needs to be long enough to process the previous batch of results. Each scroll request (with the scroll parameter) sets a new expiry time.
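
A minimal sketch of this first request using the Python requests library (the host, index and field names are assumptions carried over from the earlier example):

import requests

# Create the scroll context: keep it alive for 1 minute, return 2 docs per batch.
resp = requests.post(
    "http://localhost:9200/index_name/_search",   # assumed local Elasticsearch host
    params={"scroll": "1m"},
    json={
        "size": 2,
        "query": {"match": {"field": "value_to_match"}},
    },
)
result = resp.json()
scroll_id = result["_scroll_id"]       # needed to fetch the next batch
first_batch = result["hits"]["hits"]   # the first 2 matching documents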

4.) The result of the above query is as follows :-
[Image: response showing the _scroll_id, total hit count and the first batch in hits.hits]

5.) The result can be explained as follows :-
–> “_scroll_id” :- the scroll id, which is used as a parameter to get the next set of results.
–> “total” :- the total number of matching records for the given search query.
–> “hits.hits” :- the list containing the batch data, which will hold 2 documents in this case as the batch size is specified as 2.

6.) So, we are left with 8 records after this query. In order to get the next set of data we have to query like this :-
[Image: scroll request passing the context time and the scroll_id]

The query has only two parameters: the context time for the next batch of data and the scroll_id received in the previous API hit. Result :-
[Image: response containing the next batch of hits and a new scroll_id]

The result of the above query gives the next set of results along with a scroll_id, so we have to make the above request again with that scroll_id in order to get the next set of data, and so on till we get all the data records.
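
Putting it together, a minimal sketch of the scroll loop, continuing from the scroll_id and first_batch variables of the earlier sketch:

import requests

# Keep requesting the next batch until Elasticsearch returns an empty one.
all_docs = list(first_batch)
while True:
    resp = requests.post(
        "http://localhost:9200/_search/scroll",   # assumed local Elasticsearch host
        json={"scroll": "1m", "scroll_id": scroll_id},
    )
    result = resp.json()
    batch = result["hits"]["hits"]
    if not batch:
        break                            # no more records left
    all_docs.extend(batch)
    scroll_id = result["_scroll_id"]     # use the latest scroll_id for the next hit

print(len(all_docs))                     # total number of documents retrieved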

I have written a Python script that implements and demonstrates this Elasticsearch scroll flow.
You can find the code at this link.

Hope you find the article useful.

Happy Hacking 🙂

I am going to Akademy’17 !!

I am very happy to share that next month I will be attending Akademy 2017, the yearly KDE community summit that has been held since 2003 and which this year will take place in Almería, Spain from July 22nd until July 27th 2017.

I am very grateful to KDE for providing me this wonderful opportunity to meet and connect with awesome people from the KDE community. Moreover, KDE has also provided me financial support by accepting my travel request, so that I can travel to Spain without any financial problems.

I am planning to schedule my travel from 21 July 2017 to 27 July 2017. Another good part of this event is that KDE has also given me the opportunity to give a short talk (10 min) about my experience with the open source world and how I am contributing to the community. So, I will give a short talk titled “Getting started with GCompris”, which will highlight how and when I started contributing to GCompris, an awesome FOSS project under KDE.

So, once again, I am very grateful to the KDE community for providing me this wonderful opportunity.
See you in Almeria !!