It has been a year since I realized that, in this highly entropic universe, my sisters are the strongest and most stable place to which I belong. A place where I was always present but never able to see it.

So, how did my sisters play this role? Or in general, how do one's sisters play this role?

Throughout my existence, one thing I remember is the warmth, caring, and love poured into me by my sisters. This laid the foundation for me as a child. This is true when it comes to infants or children; besides food and great playtime or relaxation, these three elements are the ones that lay the foundation.

I was a really hyperactive kid, and I still consider myself one! Why not? I mean, I see everyone as just grown-up children who were hurt throughout their lives by other hurt children. This is true, as everyone is still figuring out what life means. If you look at life through the lens of an adult, you'll mostly find yourself sad and depressed. But if you look at the world through the lens of a child, not only will you learn new things, but you'll also be much happier.

And that's my mantra: living in the moment and holding my sisters' hands while looking at what life brings me.

So, I have 5 sisters, and each of them has given me a unique strength -

Pooja Didi - I have learnt sincerity from her.

Vinti Didi - I have learnt caring from her.

Arti Didi - I have learnt fighting from her.

Shruti Didi - I have learnt creativity from her.

Kriti Didi - I have learnt calmness from her.

If you pay attention, these 5 qualities are all that a person needs in order to grow into a good person.

I am fortunate enough to have sisters like these who always kept holding my hand. <3

Moreover, I have also found two more sisters, Arya and Ayushi, though the relationship is still in its infancy. I found them through my brother Ayushman, as they are his sisters, so I consider them my sisters too. But, man, they are both so simple and pious, and yet if they are anywhere around, believe me, you will never be bored.

Back in 2014 when I used to take my programming classes, I spent around one and a half months just understanding how loops work. That includes printing out different patterns, cascading loops and looking at how memory is updated at each level of loop with only pen and paper, no digital device allowed!

For the past few years, one thing has kept bothering me: the “programming tutorials” on YouTube that teach everything in just 5 minutes. I was like, how is it even possible to understand something in just 5 minutes?

These presenters say that a loop is something that repeats for a certain amount of time. They do use the word iterate sometimes, but they claim that it means the same as repeat.

That isn't true at all, as far as I remember. To iterate means to keep doing the task with an updated value in order to reach the result at the end, while to repeat means to keep doing the same thing with no change.

"Iteration" comes from the Latin "iter" meaning "journey". Iteration is performing the same task at different places or with different circumstances as part of a longer sequence. It is not doing the exact same thing (that would be reiteration - to repeat a journey). The word "Iteration" always conveys the sense of progress, improvement, growth or change.

Repetition, on the other hand, is to perform the exact same action, with no change of circumstance. Unlike "Iteration", "Repetition" does not suggest progress or movement forwards.

Life and loops are similar in terms of initiation, maintenance, and termination.

Initiation is starting at a point and maintaining ourselves.

Maintenance is keeping healthy and working to avoid getting stuck.

Termination is the final stage of a loop.

Iterations or repetitions in a loop can involve using different values each time.

Initialization, Maintenance, and Termination are the three properties of a loop.

Initialization is the starting point of the loop.

Maintenance is the process of iteration in the loop.

Termination is the condition that ends the loop.
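These three properties can be seen directly in a piece of code. Here is a minimal sketch of my own (the function name sumFirstN is made up for illustration), with each property annotated as a comment:

```cpp
// Sum the first n natural numbers, annotating the three loop properties:
// initialization, maintenance, and termination.
int sumFirstN(int n) {
    int sum = 0;          // initialization: before the first iteration,
                          // sum holds the total of zero numbers
    for (int i = 1; i <= n; ++i) {
        sum += i;         // maintenance: after this line, sum holds
                          // the total of the first i numbers
    }
    // termination: the loop ends once i exceeds n, and at that point
    // sum holds the total of the first n numbers
    return sum;
}
```

Notice that each pass uses an updated value of i, which is exactly the iteration-versus-repetition distinction above.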

Induction is the process of building up an idea step by step.

In induction, the truth of a base case is used to project further outcomes.

Induction-like reasoning can be encountered in everyday decisions such as college admissions or choosing a major.

Induction is similar to iteration: the same reasoning is applied again and again with different values.

Loops and induction have a crucial difference.

A loop by itself does not carry out mathematical induction; it only executes.

Induction, on the other hand, involves proving something step by step.

State machine and mathematical induction are different because state machine is finite while mathematical induction is not.

The state machine terminates and is finite, just like us.

Mathematical induction is not finite and does not end.

State machine can be proven through experience and achievements.

Looping and maintaining a base is crucial in state machine.

Mathematical induction is different from Marvel's explanation of Quantum.

Explanation of insertion sort using playing cards

The book uses the example of playing cards to explain the loop used in insertion sort.

The choice of playing cards as an example highlights the thought process of the writers.

The authors were philosophical in selecting the card example.

The card example is common worldwide and has two colors – red and black.

The use of red and black in the example is influenced by the black and white screens used in older days.

Insertion sort is an in-place sorting algorithm that sorts values by inserting them in their correct position.
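The card-by-card picture translates almost directly into code. Here is a sketch of the insertion sort loop in C++ (the idea follows CLRS, but this exact code is my own):

```cpp
#include <vector>

// Insertion sort: like picking up cards one at a time and inserting
// each into its correct position among the cards already in hand.
void insertionSort(std::vector<int>& a) {
    for (std::size_t j = 1; j < a.size(); ++j) {
        int key = a[j];              // the card we just picked up
        std::size_t i = j;
        // shift larger cards one slot right to open a gap for key
        while (i > 0 && a[i - 1] > key) {
            a[i] = a[i - 1];
            --i;
        }
        a[i] = key;                  // drop the card into the gap
    }
    // loop invariant: before each iteration, a[0..j-1] is sorted
}
```

The comment on the invariant is the maintenance property from the previous post: it holds before the first iteration, each iteration preserves it, and at termination it gives a fully sorted array.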

Sorting a deck of cards in a game

The players couldn't figure out the rules of the game

They randomly started sorting the cards

They ended up with a complete sorted deck of 52 cards

The ability to sort the deck determined the outcome of the game

The example is taken from the book CLRS.

Meanwhile, here is the video I recorded, explaining what loops are and how precisely they work.

In short, loops are nothing but finite Mathematical Induction.

Saturday 5 August 2023

My favourite Computation book

I still remember that I bought this book at the end of 2017. After learning a good amount of Python from the book Learning Python by Mark Lutz, I started watching the lectures of Prof. John Guttag on MIT OpenCourseWare. One of my favourite parts of that book was the Monte Carlo simulation.

Overall, if anyone is interested, it's a fair book to go with.

Wednesday 21 June 2023

What do Bipartite Graphs actually do?

This is a question that kept going around in my head for years, so I decided to look into how bipartite graphs actually work. I mean, what exactly are they used for?

So, I started learning discrete mathematics again, where I found a link between different optimization techniques and how they correlate with each other and with bipartite graphs as well.

I came up with an example on the fly, related to mate selection among animal species, which helps in better understanding this particular graph method.

Below are some points taken from my video, and I hope they make optimization approaches in real life easier to understand.

Studying bipartite graphs in discrete mathematics

Discovered bipartite graphs while studying discrete mathematics

Bipartite graphs have two disjoint sets with no connections within the sets.
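That two-set structure can be tested mechanically: a graph is bipartite exactly when its nodes can be 2-colored so that every edge joins different colors. Here is a sketch in C++ (the function name isBipartite is my own):

```cpp
#include <vector>
#include <queue>
#include <utility>

// Check whether an undirected graph is bipartite by trying to 2-color
// it with a BFS: neighbouring nodes must always get opposite colors.
bool isBipartite(int n, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> adj(n);
    for (const auto& e : edges) {
        adj[e.first].push_back(e.second);
        adj[e.second].push_back(e.first);
    }
    std::vector<int> color(n, -1); // -1 = uncolored, otherwise 0 or 1
    for (int s = 0; s < n; ++s) {
        if (color[s] != -1) continue;  // handle each component
        color[s] = 0;
        std::queue<int> q;
        q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (color[v] == -1) {
                    color[v] = 1 - color[u]; // put v on the other side
                    q.push(v);
                } else if (color[v] == color[u]) {
                    return false; // edge inside one set: not bipartite
                }
            }
        }
    }
    return true;
}
```

An even cycle splits cleanly into two sets, while an odd cycle cannot, which is why the check fails on triangles.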

Optimizing decision making through bipartite graphs

Graphs are used to optimize decision making in various scenarios such as traveling to a city

Bipartite graphs help make efficient connections between subsets, ensuring happiness for both males and females.

Bipartite graphs help in making connections between large number of nodes efficiently.

By dividing nodes into two subsets, connections can be made between them without having to connect each node individually.

This method is particularly useful when dealing with a large number of nodes, such as in matchmaking scenarios.

Using optimization to match characteristics and eliminate irrelevant species

Approach involves a probabilistic way of thinking

Eliminating irrelevant species to focus on specific characteristics.

Using the Hungarian method to optimize matching based on weights

Consider the weights of the males and match them with females based on proximity

Eliminate unnecessary paths to optimize matching

Preference is important in both males and females

Evolutionary biology has made males and females have preferences

Preference is important in various scenarios like railway ticket reservation

Consider preferences in seat selection using stable marriage algorithm

Weighted Hungarian method doesn't consider preferences. (Weights are given to the edge and preferences are given to the nodes)

Stable marriage algorithm considers both weights and preferences.
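The stable marriage algorithm referred to here is typically the Gale-Shapley procedure. Below is a compact sketch under the classic setup, where each side ranks the other (the names stableMatch, prefM, rankW are my own):

```cpp
#include <vector>

// Gale-Shapley stable matching: men propose in preference order; women
// tentatively accept and trade up if a better proposer arrives.
// prefM[m] is man m's ranked list of women; rankW[w][m] is how woman w
// ranks man m (lower = preferred). Returns wifeOf[m] for each man m.
std::vector<int> stableMatch(const std::vector<std::vector<int>>& prefM,
                             const std::vector<std::vector<int>>& rankW) {
    int n = static_cast<int>(prefM.size());
    std::vector<int> next(n, 0);     // next woman each man will propose to
    std::vector<int> husband(n, -1); // current partner of each woman
    std::vector<int> wifeOf(n, -1);
    std::vector<int> freeMen;
    for (int m = 0; m < n; ++m) freeMen.push_back(m);
    while (!freeMen.empty()) {
        int m = freeMen.back(); freeMen.pop_back();
        int w = prefM[m][next[m]++]; // best woman he hasn't tried yet
        if (husband[w] == -1) {
            husband[w] = m; wifeOf[m] = w;           // she was free
        } else if (rankW[w][m] < rankW[w][husband[w]]) {
            int dumped = husband[w];                  // she prefers m
            wifeOf[dumped] = -1; freeMen.push_back(dumped);
            husband[w] = m; wifeOf[m] = w;
        } else {
            freeMen.push_back(m);                     // rejected; tries again
        }
    }
    return wifeOf;
}
```

Unlike the weighted Hungarian method, the input here is pure preference rankings on the nodes, exactly the distinction made above.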

Bipartite graphs are used to optimize connections between two disjoint sets.

Removing certain connections heuristically to optimize the graph.

Introduction of the weights and preferences so we have to optimize connections further.

First, we are going to start with logic, because it is a branch of mathematics and philosophy concerned with reasoning and argumentation. Based on this reasoning, we have practically working applications in our computers, be it verifying the correctness of computer programs, ensuring the quality and reliability of software systems, proving the security of cryptographic algorithms, and many, many more. So, we have our reasoning in place; the only thing left is to validate that reasoning, and for that we need something called a proof. Proofs, in mathematical logic, are a series of logically deduced statements used to establish the truth of a proposition or theorem (once we prove a mathematical statement is true, we call it a theorem), and the concept of a proof is used to demonstrate the correctness of algorithms and to establish the validity of logical systems. Examples of the two combined form a vast array: optimizing schedules, finding the shortest path between two points, determining the most efficient way to use resources, choosing between different options, evaluating risks and benefits, and weighing trade-offs.

The very first thing in logic is a statement, also called a proposition, which is declarative in nature. That is, it can be either true or false. In propositional logic, we use symbols to represent propositions, and logical connectives (such as "and", "or", "not", "implies", etc.) to build complex expressions.

Examples of valid propositions-

New Delhi is the capital of the Indian Republic.

1+1 = 2 (There is a 300-page book Principia Mathematica for this)

Jang (my friend) was born in Manipur.

Examples of invalid propositions-

Do this!

Stand-up!

What a beautiful day!

Is the sun hot?

Two things to note here. FIRST, since in mathematics we usually deal with problems (statements) in the form of variables to avoid writing those lengthy statements, we are going to use p, q, r, s, . . . as the variables to represent the individual statements. It's not compulsory, but it is something of a standard that people follow; even in programming we tend to use i, j, k, . . . as the inner variables in loops.

SECOND is to create new propositions, which are statements constructed by combining one or more existing propositions. You can think of it this way: your friend tells you that he talked to this very girl/boy, and your response to that information is, quite probably, "No way you talked to her/him."

Let's try to represent this using what we discussed above -

let p be the statement said by your friend

p = I talked to this girl/boy.

let q be the statement which you said to your friend

q = You can't talk to that girl/boy.

Do you see any inference from here?

The thing I want to convey here is that the very thing you usually do with your friend is oppose what they said, just to tease them. But what that actually does is build the conversation between you and your friend, i.e., it makes your conversation complex and fun.

This is exactly what we do here: once we have our statements in place, we try to make them more complex so that we can optimise them and reach a better solution in the end.

Let p be a proposition. The negation of p is denoted by ¬p.

p: Jang is a good PC gamer.

¬p : Jang is not a good PC gamer.

Now we're done converting the statements into variables, and we've started making statements more complex by combining them.

We are now at the point of combining both of them into tables, which are known as truth tables.

A truth table is a table used in logic and mathematics to evaluate the truth value of a proposition or an argument. A truth table lists all possible combinations of truth values for the propositions involved and shows the resulting truth value of the whole statement.

A truth table has one row for each possible combination of truth values of the propositions, and the columns represent the propositions involved in the statement. The last column of the truth table contains the truth value of the whole statement, based on the truth values of the propositions.

Here is the truth table for the negation of a proposition:

p    ¬p
T    F
F    T

Now, if you pay a little attention to your own language, there are always some connectives that allow you to create more complex propositions.

Two such basic ones are also available here, and they are and & or.

They are represented by ∧ & ∨(also known as conjunction & disjunction)

In the English language, the connective or is used in two ways. Sometimes it can include both sentences, i.e., the inclusive or; sometimes it means that only one of the two connected sentences can be valid, not both at the same time, i.e., the exclusive or.

So, the inclusive or is represented with the same symbol as above (∨), but to make a distinction, for the exclusive or we use the symbol ⊕.

Now that we have seen conjunction, disjunction and XOR, it's time to see conditional statements.

A conditional statement is nothing but "if p then q", meaning that q holds whenever its sufficient condition p occurs, and this is represented as p → q.

There are some variations to the same -

“if p, then q”

“p implies q”

“if p, q”

“p only if q”

“p is sufficient for q”

“a sufficient condition for q is p”

“q if p”

Here's an example of a conditional statement:

"If it rains, then I will carry an umbrella."

In this example, the hypothesis is "it rains" and the consequent is "I will carry an umbrella." If it rains and I do carry an umbrella, the whole statement is true; if it rains and I don't, it is false. If it doesn't rain, the hypothesis is false and the whole statement is true regardless (vacuously true).

There are three more things which are related to this Converse, Contrapositive and Inverse. Let's look at them -

The proposition q → p is called the converse of p → q.

The contrapositive of p → q is the proposition ¬q → ¬p.

The proposition ¬p → ¬q is called the inverse of p → q

Here's an example to illustrate the concepts of converse, contrapositive, and inverse:

"If it is raining, then the streets are wet."

Converse: "If the streets are wet, then it is raining."

Contrapositive: "If the streets are not wet, then it is not raining."

Inverse: "If it is not raining, then the streets are not wet."
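Since each connective is just a truth function, you can verify these relationships mechanically by checking all four truth assignments: a conditional always agrees with its contrapositive, but not with its converse. A small sketch (the function names are mine):

```cpp
#include <initializer_list>

// p -> q as a boolean function: false only when p is true and q is false.
bool implies(bool p, bool q) { return !p || q; }

// Does p -> q agree with its contrapositive !q -> !p everywhere?
bool contrapositiveMatches() {
    for (bool p : {false, true})
        for (bool q : {false, true})
            if (implies(p, q) != implies(!q, !p))
                return false; // a disagreement would break equivalence
    return true;
}

// Does p -> q agree with its converse q -> p everywhere?
bool converseMatches() {
    for (bool p : {false, true})
        for (bool q : {false, true})
            if (implies(p, q) != implies(q, p))
                return false; // fails at p = true, q = false
    return true;
}
```

Running these checks shows the contrapositive is logically equivalent to the original conditional, while the converse (and hence the inverse, its contrapositive) is not.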

There is something also called a biconditional statement and it's represented by p ↔ q. It's true when both p & q have the same truth values.

Predicates and Quantifiers are fundamental concepts in mathematical logic.

A predicate is a function that maps elements of a set to truth values (true or false).

Predicates can be used to describe the properties of elements in a set. For example, the predicate "x is an even number" can be applied to elements of the set of integers to determine whether they are even or odd.

Quantifiers are symbols used to express the number of elements in a set that satisfy a predicate. The two most common quantifiers are the universal quantifier (∀) and the existential quantifier (∃).

The universal quantifier (∀) expresses that a predicate is true for all elements in a set. For example, "for all x in the set of natural numbers, x > 0" can be expressed as ∀x (x > 0).

The existential quantifier (∃) expresses that there exists at least one element in a set that satisfies a predicate. For example, "there exists a natural number x such that x > 10" can be expressed as ∃x (x > 10).
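Over a finite set, these quantifiers correspond directly to standard library algorithms: the universal quantifier is std::all_of and the existential quantifier is std::any_of. A sketch using the two example predicates above (function names are mine):

```cpp
#include <vector>
#include <algorithm>

// ∀x in s, x > 0 : true when every element satisfies the predicate.
bool forAllPositive(const std::vector<int>& s) {
    return std::all_of(s.begin(), s.end(), [](int x) { return x > 0; });
}

// ∃x in s, x > 10 : true when at least one element satisfies it.
bool existsGreaterThanTen(const std::vector<int>& s) {
    return std::any_of(s.begin(), s.end(), [](int x) { return x > 10; });
}
```

Note that std::all_of is vacuously true on an empty set, just as ∀ is over an empty domain.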

Rules of inference

Why are rules of inference needed in logic and proofs?

Rules of inference are needed in logic and proofs because they provide a systematic method for deducing new conclusions from given premises. These rules are based on logical relationships between propositions, and they allow us to determine whether a conclusion logically follows from the premises.

By using rules of inference, we can build logical arguments that are valid and can be used to prove theorems, solve problems, and make decisions.

I, Devansh, after many years, have finally started doing what I always wanted to do, and that is to share my knowledge in the way I learn and understand things.

This tutorial is prepared for those who need assistance with Disk Scheduling Algorithms.

INTRODUCTION

In operating systems, seek time is very important. Since all device requests are linked in queues, the seek time is increased, causing the system to slow down. Disk Scheduling Algorithms are used to reduce the total seek time of any request.

PURPOSE

The purpose of this material is to provide help with disk scheduling algorithms. Hopefully, with this, one will be able to get a stronger grasp of what disk scheduling algorithms do.

There are many Disk Scheduling Algorithms, but before discussing them let's have a quick look at some of the important terms:

Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be read or written. So the disk scheduling algorithm that gives the minimum average seek time is better.

Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into a position where the read/write heads can access it. So the disk scheduling algorithm that gives the minimum rotational latency is better.

Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and the number of bytes to be transferred.

Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time

TYPES OF DISK SCHEDULING ALGORITHMS

Although there are other algorithms that reduce the seek time of all requests, I will only concentrate on the following disk scheduling algorithms:

First Come-First Serve (FCFS)

Shortest Seek Time First (SSTF)

Elevator (SCAN)

Circular SCAN (C-SCAN)

LOOK

C-LOOK

These algorithms are not hard to understand, but they can confuse someone because they are so similar. What we are striving for by using these algorithms is to keep head movements (number of tracks) as low as possible. The less the head has to move, the faster the seek time will be. I will show you and explain why C-LOOK is the best algorithm to use when trying to achieve the least seek time.

Given the following queue -- 95, 180, 34, 119, 11, 123, 62, 64 -- with the read-write head initially at track 50 and the tail track at 199, let us now discuss the different algorithms.

1. First Come-First Serve (FCFS)
All incoming requests are placed at the end of the queue. Whatever number is next in the queue will be the next number served. Using this algorithm doesn't provide the best results. To determine the number of head movements, you simply find the number of tracks it took to move from one request to the next. For this case it went from 50 to 95 to 180 and so on. From 50 to 95 it moved 45 tracks. If you tally up the total number of tracks, you will find how many tracks it had to go through before finishing the entire request. In this example, it had a total head movement of 644 tracks. The disadvantage of this algorithm is noted by the oscillation from track 50 to track 180 and then back to track 11, to 123, then to 64. As you will soon see, this is the worst algorithm that one can use.

Advantages:

Every request gets a fair chance

No indefinite postponement

Disadvantages:

Does not try to optimize seek time

May not provide the best possible service

2. Shortest Seek Time First (SSTF) In this case, requests are serviced according to the next shortest distance. Starting at 50, the next shortest distance would be 62 instead of 34, since the head is only 12 tracks away from 62 and 16 tracks away from 34. The process continues until all the processes are taken care of. For example, the next step would be to move from 62 to 64 instead of 34, since there are only 2 tracks between them, and not 30 if it were to go the other way. Although this seems to be better service, moving a total of 236 tracks, it is not optimal. There is a great chance that starvation would take place. The reason is that if there were a lot of requests close to each other, the other requests would never be handled, since their distance would always be greater.

Advantages:

Average Response Time decreases

Throughput increases

Disadvantages:

Overhead to calculate seek time in advance

Can cause Starvation for a request if it has higher seek time as compared to incoming requests

High variance of response time as SSTF favours only some requests

3. Elevator (SCAN) This approach works the way an elevator does. It scans down towards the nearest end, and when it hits the bottom, it scans up, servicing the requests that it didn't get going down. If a request comes in after its track has been scanned, it will not be serviced until the arm comes back down or moves back up. This process moved a total of 230 tracks. Once again, this is more optimal than the previous algorithm, but it is not the best.

Advantages:

High throughput

Low variance of response time

Average response time

Disadvantages:

Long waiting time for requests for locations just visited by disk arm.
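For the down-first sweep used in the example, the SCAN total can be computed directly: the head travels from its start down to track 0 and then back up to the highest pending request. A sketch (the helper name is mine, not from the tutorial):

```cpp
#include <vector>
#include <algorithm>

// SCAN (elevator), with the head moving toward track 0 first: the arm
// sweeps down to track 0 servicing requests on the way, then reverses
// and sweeps up to the highest pending request. Total movement is
// (head -> 0) plus (0 -> highest request).
int scanTotalDownFirst(const std::vector<int>& requests, int head) {
    int highest = *std::max_element(requests.begin(), requests.end());
    return head + highest;
}
```

On the tutorial's queue with the head at 50, this gives 50 + 180 = 230 tracks, matching the figure above.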

4. Circular Scan (C-SCAN) Circular scanning works just like the elevator to some extent. It begins its scan toward the nearest end and works its way all the way to the end of the system. Once it hits the bottom or top, it jumps to the other end and moves in the same direction. Keep in mind that the huge jump doesn't count as a head movement. The total head movement for this algorithm is only 187 tracks, but this still isn't the most efficient.

5. C-LOOK This is just an enhanced version of C-SCAN. In this, the scanning doesn't go past the last request in the direction it is moving. It too jumps to the other end, but not all the way to the end; just to the furthest request. C-SCAN had a total movement of 187, but this scan (C-LOOK) reduced it to 157 tracks.

From this you were able to see a scan change from 644 total head movements to just 157. You should now have an understanding as to why your operating system truly relies on the type of algorithm it needs when it is dealing with multiple processes.

NOTE: It is important that you draw out the sequence when handling algorithms like this one. One would have a hard time trying to determine which algorithm is best by just reading the definition. There is a good chance that without the drawings there could be miscalculations.

Programs related to these Algorithms -

/* Implementation of DISK Scheduling Algorithm using FCFS.
   Data structure used - ARRAY.
   Implemented by - Devansh Varshney
   GitHub ID - varshneydevansh */
#include <iostream>
#include <cstdlib> // for abs()
using namespace std;

int main(void)
{
    int chart[100], result[100]; // data structures used
    int head, i, n, sum = 0;

    cout << "\n\tEnter the number of processes\n\t";
    cin >> n;

    cout << "\n\tEnter the process numbers\n\t";
    for (i = 0; i < n; i++) {
        cin >> chart[i]; // getting the processes into the chart
        cout << "\t";
    }

    cout << "\n\tEnter the HEAD number\n\t";
    cin >> head;

    // distance the head moves for each request, in arrival order
    for (i = 0; i < n; i++) {
        result[i] = abs(chart[i] - head);
        head = chart[i];
    }

    for (i = 0; i < n; i++) {
        sum += result[i];
    }

    cout << endl;
    for (i = 0; i < n; i++) {
        cout << "\t" << result[i];
    }
    cout << "\n\tSUM IS:\t" << sum << endl;
    return 0;
}
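In the same spirit, here is SSTF as a function sketch rather than an interactive program (the structure and name sstfTotal are my own, reusing the tutorial's example values):

```cpp
#include <vector>
#include <cstdlib> // for std::abs

// SSTF: repeatedly service the pending request closest to the current
// head position. Returns the total number of tracks moved.
int sstfTotal(std::vector<int> requests, int head) {
    int total = 0;
    while (!requests.empty()) {
        // find the request with the shortest seek from the current head
        std::size_t best = 0;
        for (std::size_t i = 1; i < requests.size(); ++i)
            if (std::abs(requests[i] - head) < std::abs(requests[best] - head))
                best = i;
        total += std::abs(requests[best] - head);
        head = requests[best];
        requests.erase(requests.begin() + best); // request serviced
    }
    return total;
}
```

For the queue 95, 180, 34, 119, 11, 123, 62, 64 with the head at 50, this reproduces the 236-track total discussed above.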