CQRS in .NET

Hello everybody,

today I want to make a snapshot, or dump, of my brain about CQRS, so be warned before reading it: it can be confusing.

Very simple explanation of CQRS

Usually, when you look at some kind of application, you can expect that what you see on the screen is what you'll see in the db. It means that if you see an Order and its order details on the screen, you can expect to see Order and OrderDetails tables in the database, a "one to one" relation between screen and DB.

Like this:

But with CQRS the following changes can be applied:

Instead of two tables (Order and OrderDetails), you will see three tables. Now pause for a moment and guess what their names could be. One possibility is Orders, OrdersDetails, and ReadOrderDetails. And as you probably guessed, ReadOrderDetails may represent the whole screen of your order.

More detailed description of CQRS

So, what does CQRS stand for? CQRS stands for Command/Query Responsibility Segregation.

What are commands and queries?

Commands = writes of data

Queries = reads of data.

The first time I heard it I was astounded. Why??? Why split the same entity apart? But to summarize it: you write information into one part of the db, but read from another part of the db.

So, it gives you some side effects. Consider some of them:

  1. the data you read becomes stale right after you get it from the db

  2. the number of reads is usually greater than the number of writes

  3. reads and writes go to different places

  4. commands modify data

  5. queries just get data

  6. the read db is usually not normalized

  7. the read db usually has no joined tables

  8. an ORM blurs the difference between the model and what is in the db.
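To make the split tangible, here is a minimal in-memory sketch in plain JavaScript (all names here are my own invention, not from any framework): the command side writes normalized rows, a projection keeps a denormalized read model in sync, and queries only touch the read model.

```javascript
// Write side: normalized "tables" that commands modify.
const orders = [];        // { id, customer }
const orderDetails = [];  // { orderId, product, price }

// Read side: one denormalized row per order, ready for the screen.
const readOrderDetails = {}; // orderId -> { id, customer, lines, total }

// Command: modifies data on the write side, then updates the projection.
function addOrderLineCommand(orderId, customer, product, price) {
  if (!orders.find(o => o.id === orderId)) {
    orders.push({ id: orderId, customer: customer });
  }
  orderDetails.push({ orderId: orderId, product: product, price: price });
  project(orderId); // keep the read model in sync (here: synchronously)
}

// Projection: rebuilds the read-side row for one order.
function project(orderId) {
  const lines = orderDetails.filter(d => d.orderId === orderId);
  readOrderDetails[orderId] = {
    id: orderId,
    customer: orders.find(o => o.id === orderId).customer,
    lines: lines,
    total: lines.reduce((sum, l) => sum + l.price, 0)
  };
}

// Query: just reads the denormalized row, no joins, no modification.
function getOrderQuery(orderId) {
  return readOrderDetails[orderId];
}

addOrderLineCommand(1, 'Bob', 'bread', 10);
addOrderLineCommand(1, 'Bob', 'butter', 4);
```

In a real system the projection would usually be updated asynchronously, which is exactly why the read data can be slightly stale (side effect 1 above).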

CQRS usually uses something like event sourcing in order to keep track of whatever happened. Events usually live in an append-only table in the db. In yet other words, ES is the recording of all state changes to the domain as a series of events, which gives you the following benefits:

a. you can detect bugs in the domain

b. you can replay changes

c. the current state is always correct, because the known history of changes is available
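The append-only idea can be sketched in a few lines of plain JavaScript (a toy model, not a production event store): every change is appended as an event, and the current state is just a replay of the whole history.

```javascript
// Append-only event log: events are only ever added, never updated or deleted.
const eventLog = [];

function append(event) {
  eventLog.push(event);
}

// Replay: fold the whole history into the current state.
// Because the history is kept, we can rebuild state at any point in time.
function replay(events) {
  return events.reduce((state, e) => {
    switch (e.type) {
      case 'OrderCreated': return { ...state, id: e.id, lines: [] };
      case 'LineAdded':    return { ...state, lines: [...state.lines, e.line] };
      case 'LineRemoved':  return { ...state, lines: state.lines.filter(l => l !== e.line) };
      default:             return state;
    }
  }, {});
}

append({ type: 'OrderCreated', id: 1 });
append({ type: 'LineAdded', line: 'bread' });
append({ type: 'LineAdded', line: 'butter' });
append({ type: 'LineRemoved', line: 'bread' });

const current = replay(eventLog);
```

Replaying a prefix of the log instead of the whole log gives you the state as it was at that moment, which is what makes bug hunting (benefit a) so convenient.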

To summarize this part of my brain dump:

  • the huge difference between reads and writes should be reflected in code
  • the domain model must be unbound from the data store
  • a single model can't represent everything (transactions, reporting, searching)

DDD or Domain Driven Design

Hello everybody,

today I want to share some glimpses of what Domain Driven Design is.

One of the main ideas of DDD is to let the problem guide the software design, not to let the software create problems for the client. There is a popular joke that computers help to solve problems which didn't exist before computers were invented.

The same goes for system design. Design the system in order to solve the problem, not just use the system to create new problems for the business owner.

How can this be achieved, so that the situation pictured above is avoided?

First of all, you should apply the following:

  • Ubiquitous Language
  • Bounded Contexts
  • Aggregate Root

I hope the first bullet item is the easiest to explain. Let me give you one example. Here in Ukraine, I volunteered for a stationery company. And in Ukrainian the following items are called a "file":

But for the majority of programmers, a file is usually a named piece of information on disk. So it took time to distinguish what is a file and what is a packing file (not a zip, huh).

The phrase bounded context means the following: depending on the words around it, a different word from the ubiquitous language can apply. I could also describe a bounded context with the word dialect. Speaking with a user of your program you'll have one vocabulary; speaking with a developer of some program you'll have another. One practical usage can be the following. Let's say you have three developers. Then one developer can work with the accountant, another with the security guy, and the third with the sellers; each one can have his own solution and his own source code, and/or you can think about sharing some common source code between them.

And aggregates are compositions of objects. For example, an order is a composition of: the products in the order, the prices of the order, the discounts of the order, the delivery details of the order, the type of the order. Usually you can apply three kinds of aggregate rules: identity, reference, operation.
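A sketch of such an order aggregate in plain JavaScript (my own illustration, not canonical DDD code): the order is the aggregate root, and outside code goes through the root instead of touching the order lines directly, so the invariants live in one place.

```javascript
// Aggregate root: the only entry point for changing the order's contents.
function Order(id) {
  this.id = id;
  this.lines = [];      // products with prices, owned by the aggregate
  this.discount = 0;    // fraction, e.g. 0.5 for 50%
}

// Operations go through the root so invariants stay in one place.
Order.prototype.addProduct = function (name, price) {
  if (price < 0) throw new Error('price must be non-negative'); // invariant
  this.lines.push({ name: name, price: price });
};

Order.prototype.applyDiscount = function (fraction) {
  this.discount = fraction;
};

Order.prototype.total = function () {
  const sum = this.lines.reduce((s, l) => s + l.price, 0);
  return sum * (1 - this.discount);
};

const order = new Order(1);
order.addProduct('bread', 10);
order.addProduct('butter', 4);
order.applyDiscount(0.5);
```

Because only the root can change the lines and the discount, the total can never get out of sync with them.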

AngularJS inheritance in JS

Hello everybody,

today I want to write a short note about another useful feature of AngularJS, particularly about the extend function.

So, according to the comment in the angular.js file, the extend function does the following:

You may wonder, why do I need this? 

Well, one of the reasons is imitating inheritance in some way, gaining additional modularity, but without usage of prototypes. Is it useful? I believe at least for somebody it will be.
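In short, angular.extend(dst, src1, src2, ...) shallow-copies the own enumerable properties of each source object onto dst and returns dst. Here is a simplified plain-JS sketch of that behavior (the real implementation handles more edge cases) together with the "inheritance without prototype" trick:

```javascript
// Simplified imitation of angular.extend: shallow-copy each source onto dst.
function extend(dst) {
  for (let i = 1; i < arguments.length; i++) {
    const src = arguments[i];
    for (const key in src) {
      if (Object.prototype.hasOwnProperty.call(src, key)) {
        dst[key] = src[key]; // later sources win over earlier ones
      }
    }
  }
  return dst;
}

// "Inheritance" without prototypes: a child controller reuses base behavior.
const baseController = {
  load: function () { return 'loading ' + this.name; }
};
const childController = extend({ name: 'dashboard' }, baseController);
```

Calling childController.load() now runs the base method with the child's own data, which is exactly the modularity gain mentioned above.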

AngularJS clean code advices

Hello everybody,

today I want to share with you a fragment of knowledge from John Papa and his Pluralsight course entitled AngularJS Patterns: Clean Code.

In one of the videos of the course he gives the following structure advice: Function, Inject, Register (I will name it FIR).

function DashBoardController(service1, service2) { // Function
    // controller code goes here
}

DashBoardController.$inject = ['service1', 'service2']; // Inject

angular
    .module('app')
    .controller('DashBoardController', DashBoardController); // Register

or in the form of image:

And as John Papa mentioned, Todd Motto also likes that approach:

"I prefer this sequence: write my function, inject the dependencies, ship it off into the app. It makes sense from hoisting and declarative perspectives".

 

But in one of the projects where I used to work, the company had another approach: Register, Inject, Function. You may ask: "Why?" The answer is pretty simple: if your function is pretty big, it can be inconvenient to scroll through it.

Types of questions in Data Science

Hello everybody,

today I want to share with you some notes about Data Science.

In Data Science everything starts with asking questions, or, so to say, with asking the right questions. Here goes a list of questions which data scientists ask, in order of rising complexity:

 

  • Descriptive

  • Exploratory

  • Inferential

  • Predictive

  • Causal

  • Mechanistic

 

So, first goes descriptive analysis. You just describe what you see, and assume what it may be, but not necessarily is.

 

Exploratory analysis is about searching for relationships you want to discover, but not necessarily confirming them. EA is not a final conclusion and shouldn't be used for generalizing or predicting.

 

Inferential analysis is something like mathematical induction. You take a small part of the data in order to draw a conclusion about all the data. It can be compared to tasting one spoonful of soup in order to judge the whole pot.

 

Predictive analysis is intended for taking some data about object A in order to predict the behavior of object B.

 

Causal analysis asks what will happen to one variable if you change another. For example, if you give some drug to a person, will he live longer?

 

 

Mechanistic analysis is very tough. It is intended to grasp how changes in some variables lead to changes in other variables for individual objects.

Mathematical notes about Neural networks

Hello everybody,

today I want to write a few words about why mathematicians believe that neural networks can be taught something. Recently I've read the book Fundamentals of Artificial Neural Networks by Mohamad Hassoun and want to share some thoughts in a more digestible manner, omitting some theoretical material.

 

As you may have heard, when the first kind of neural network was invented (aka the Perceptron), society greatly admired it, until Marvin Minsky and Seymour Papert showed that Perceptrons can't implement the XOR function, or, generally speaking, any function that is not linearly separable.

 

It led to a big disappointment in the area of neural networks.

But why? Because sometimes one line is not enough to approximate some kind of function. So what is needed in that case? The answer is simple: add another line.

 

 

Then the question was raised: who can guarantee that it is possible to solve the separability problem with lines alone? This guarantee became the Stone-Weierstrass theorem. And what if you want to separate your area not with lines, but with some more complicated curves? Where do you go then? Is it possible to base separability on something else? You may be surprised, but yes: this kind of guarantee was granted to all of us by Kolmogorov's theorem. Of course both of them have some limitations on what you can expect to approximate, but in general the Kolmogorov and Stone-Weierstrass theorems say that it is possible to approximate a function through a combination of other functions, or even a combination of other, simpler functions, if you need.

.NET AngularJS Treeview lazy loading implementation

Hello to everybody who follows my blog,

Today I want to share with you a hierarchical tree view example which, as the name implies, displays data in a hierarchical way. There are plenty of tools that display data hierarchically with the help of AngularJS, but not so many which have implemented lazy loading. Another part which is not common among implementations is joining AngularJS with a server-side API.

So if you ever find the need to display some hierarchical information with lazy loading, you can consider my code as a kind of base which you can extend.

 

Some details

As backend I have the following:

  1. MS SQL

  2. C# with Entity framework

  3. Web api

As frontend I have the following:

  1. AngularJS

  2. An AngularJS component.

  3. Some modifications I've added which allow lazy loading.

 

 

If you want to download the source code and immediately execute it, I will disappoint you. First you'll need to create an MS SQL database. Please create the database "HierarchyDemo" in MS SQL. In the downloaded code you'll see the file "hierarchy.xls". You can import the data from it into the table "Hierarchy" and then use it.

Here is screenshot of how tree looks during loading:

The code can be downloaded from    here

Simple math behind neuron

Hello everybody,

today I want to share with you some ideas about activation functions in neural networks. 

But before I do, let's look at a simplified picture of a neuron:

 

Usually books describe the following schema of a working neuron:

Signals go through the dendrites into the neuron body. The neuron body does some kind of conversion of the signal into another signal and sends the output through the axon to another neuron.

What is the origin of the signals, or where do they come from? It can be any place in your body: your eyes, your nose, a touch of your hand, etc. Anything your body senses is processed in the brain.

So now, imagine that you are a mathematician and want to provide a mathematical model of a neuron. How would you represent it mathematically? One of the ways to implement it is the following schema:

 

That is the general picture of how a neuron works. Just one more clarification is needed. A neuron doesn't just take input; it considers some inputs more important than others. Among neural network developers it is common practice to measure the importance of an input with a multiplier, which is named a weight.

Take a look at the clarified mathematical model:

Pretty simple, huh? I can't tell for sure, but maybe in your head there is a neuron which remembers how much bread and how much butter you like.
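The clarified model is just a weighted sum. As a tiny sketch (the weights below are made-up numbers for that hypothetical bread-and-butter neuron):

```javascript
// A neuron as a weighted sum: each input is multiplied by its weight.
function neuron(inputs, weights) {
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sum;
}

// Hypothetical taste neuron: bread matters twice as much as butter.
const output = neuron([1, 1], [2, 1]); // one portion of each
```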

 

Till now I didn't say what exactly the processor does. To put it simply, the processor can do the following:

1. Transfer its input further.

2. Modify the input according to some rule and transfer it further.

What does "transfer the input further" mean? It means summing all the values and sending them to another neuron, or a command to your hand to put on more butter or not.

Before we consider the second part, let's try to give our neuron a task: figure out how much each of the three different ingredients of your lunch costs. Let's say that your lunch consists of three ingredients: bread, butter, tomato.

The neuron in that case looks like this:

As an example, in English you can hear a statement like this: I put more weight on factors x and y than on factor z. And the brains of people can work differently. As an illustration: a neuron of one girl can put a big weight on roses, while a neuron of another girl can put more weight on chamomiles, and a neuron of yet another girl can put more weight on chocolate or even tea. And the task of the boy is to find which weight is the biggest, then find the second biggest weight and the third, and then the boy can start a new stage of his life. Sometimes another change happens in life: for some reason, the weights in neurons change. If somebody likes chocolate, it will not always be the case. If somebody smells the nice aroma of his (or her) favorite dish, the weight of chocolate can become smaller, and the person decides to eat something. And after eating, the weight of chocolate can be restored to its pre-eating state.

Imagine that a few times you've bought a sandwich with bread, butter and tomatoes, but in different amounts. Assume that you bought bread, butter and tomato three times, and paid according to the following table:

Bread, butter, tomato (portions)    Amount paid, $
1, 2, 3                             42
3, 2, 1                             46
1, 3, 2                             38

The first row of this table means that you bought one portion of bread, two portions of butter, and three portions of tomato, and you paid 42 $ for it.

The next time, you bought three portions of bread, two portions of butter and one portion of tomato, and so on.

How can you find out how much each portion of bread, butter and tomato costs? One of the ways is to solve a system of equations. But one neuron can't solve a system of equations. All it can do is change the weights which are multiplied by the signals.

In the first case, the signals were 1, 2, 3. In the second case, the signals were 3, 2, 1. In the third, the signals were 1, 3, 2.

Your neuron can behave in the following way:

1. Suppose that each portion of tomato, butter and bread costs 15 $.

2. If each portion costs 15 $, then the total should be 1 * 15 + 2 * 15 + 3 * 15 = 15 + 30 + 45 = 90.

3. 90 is too much, almost double the real value.

4. Calculate the error: Error = 90 - 42 = 48.

5. Decrease the weight of each portion proportionally to its size.

6. As we have three inputs, divide 48 / 3 = 16.

7. The total number of portions in the first row is 6 (1 + 2 + 3), so each single portion accounts for 16 / 6 ≈ 2.7 of the correction.

8. Decrease the first weight by 2.7 (16 / 6 * 1).

9. Decrease the second weight by 5.3 (16 / 6 * 2).

10. Decrease the third weight by 8 (16 / 6 * 3).

11. That gives us new weights: 12.3, 9.7, 7.

And so on. If you continue this process, sooner or later you will have some approximation of how much bread, butter and tomato cost, within some error range.

To draw a comparison with real life: it is like needing to knock on something in three places in order for it to open.
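The steps above are a hand-made form of error-driven learning. Below is a sketch of the same idea using the standard delta rule (a small correction after each example, also known as the LMS rule; its exact arithmetic differs from my list above), applied to the sandwich table. It slowly converges to the true prices.

```javascript
// Delta rule: after each example, nudge every weight in proportion
// to the error and to the input that contributed to it.
const samples = [
  { x: [1, 2, 3], price: 42 }, // portions of bread, butter, tomato
  { x: [3, 2, 1], price: 46 },
  { x: [1, 3, 2], price: 38 }
];

let w = [15, 15, 15];        // initial guess: every portion costs 15 $
const learningRate = 0.01;   // small step so learning stays stable

function predict(x) {
  return x.reduce((sum, xi, i) => sum + xi * w[i], 0);
}

for (let epoch = 0; epoch < 10000; epoch++) {
  for (const s of samples) {
    const error = s.price - predict(s.x);       // how far off we are
    for (let i = 0; i < w.length; i++) {
      w[i] += learningRate * error * s.x[i];    // proportional correction
    }
  }
}
// w ends up close to the prices that solve the system exactly:
// bread 10 $, butter 4 $, tomato 8 $
```

So the neuron never "solves" the system of equations; it just keeps adjusting weights until the error becomes negligible.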

And to summarize what the processor does: it works as an adding machine. Is that fine, or is it some kind of silver bullet which fits all real-life cases? Not always.

Consider this case. Sometimes shops have the following price policy: if you buy more than three tomatoes, then you'll get some kind of discount.

Another case in a shop can be multiple discounts for tomato, bread and butter. In that case a linear function definitely will not fit.

But which functions can fit?

You can try to consider the following functions:

  a. Logistic sigmoid

Here is the formula:

And here goes sample of chart:

Initially it was the most popular function in the area of neural networks, but by now it has become less popular. My personal preference is to avoid it, because it has a small range (0; 1).

  b. hyperbolic tangent

Formula:

tanh(x) = sinh(x)/cosh(x)

and chart:

For now this is my favorite function, the one I use in my neural networks. Just to clarify what I mean by favorite: I start with this function, and if I'm not satisfied with the output I move to other activation functions.

 

  c. Heaviside step

formula:

and image itself:

Functions like the Heaviside step are usually good for classification tasks.

Of course this is by no means a complete list of activation functions for neural networks. Other activation functions include: Gaussian, arctan, rectified linear function, SoftPlus, bent identity, etc.

So, to summarize the neuron architecture: it is a summarizer with some kind of activation function transformation. And you can experiment, if you wish, to find the activation function which best fits your needs.
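The three functions from the list above can be written down directly in JavaScript (these are the standard textbook formulas, nothing project-specific):

```javascript
// Logistic sigmoid: squashes any input into the range (0, 1).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Hyperbolic tangent: like the sigmoid but with range (-1, 1), centered at 0.
function tanh(x) {
  return Math.tanh(x); // equals sinh(x) / cosh(x)
}

// Heaviside step: a hard 0/1 decision, handy for classification.
function heaviside(x) {
  return x >= 0 ? 1 : 0;
}
```

Swapping one of these in place of the plain sum is all it takes to turn the "adding machine" into a neuron with a non-linear response.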

 

Hierarchical storage of data in databases

Hello everybody,

today I want to write about hierarchical storage of information in databases.

Usually, for storing a hierarchy you have a choice: fast reading or fast writing. Fast reading is usually associated with nested sets, and fast writing with adjacency lists. You can also consider some combination of both methods.

The following URLs give a good overview of what you can get:

 

So for storing hierarchical data, the following approaches fit:

  1. Adjacency List:

    • Columns: ID, ParentID

    • Easy to do.

    • fast node moves, inserts, and deletes.

    • expensive to find level (it can be stored as a computed column), ancestry & descendants (a Bridge Table combined with a level column can solve this), path (a Lineage Column can solve this).

    • Use Common Table Expressions, in those databases that support them, to traverse the tree.

  2. Nested Set (a.k.a Modified Preorder Tree Traversal)

    • Popularized by Joe Celko in numerous articles and his book Trees and Hierarchies in SQL for Smarties

    • Columns: Left, Right

    • Cheap level, ancestry, descendants

    • Compared to the Adjacency List, moves, inserts, and deletes are more expensive.

    • Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work.

  3. Nested Intervals

    • Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information. In the later development of this idea nested intervals gave rise to matrix encoding.

  4. Bridge Table (a.k.a. Closure Table; there are some good ideas about using triggers to maintain this approach)

    • Columns: ancestor, descendant

    • Stands apart from table it describes.

    • Can include some nodes in more than one hierarchy.

    • Cheap ancestry and descendants (albeit not in any particular order)

    • For complete knowledge of a hierarchy, it needs to be combined with another option.

  5. Flat Table

    • A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.

    • Expensive move and delete

    • Cheap ancestry and descendants

    • Good Use: threaded discussion - forums / blog comments

  6. Lineage Column (a.k.a. Materialized Path, Path Enumeration)

    • Column: lineage (e.g. /parent/child/grandchild/etc...)

    • Limit to how deep the hierarchy can be.

    • Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path')

    • Ancestry tricky (database specific queries)

  7. Multiple lineage columns

    • Columns: one for each lineage level, refers to all the parents up to the root, levels down from the items level are set to NULL

    • Limit to how deep the hierarchy can be

    • Cheap ancestors, descendants, level

    • Cheap insert, delete, move of the leaves

    • Expensive insert, delete, move of the internal nodes
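As an illustration of the first option, here is how an application might turn Adjacency List rows (ID, ParentID) into an in-memory tree, sketched in plain JavaScript (the row data is made up for the example):

```javascript
// Rows as they come from an adjacency-list table: (ID, ParentID).
const rows = [
  { id: 1, parentId: null, name: 'root' },
  { id: 2, parentId: 1,    name: 'child A' },
  { id: 3, parentId: 1,    name: 'child B' },
  { id: 4, parentId: 2,    name: 'grandchild' }
];

// Build the tree in two passes: index every node by id, then attach
// each node to its parent's children array.
function buildTree(rows) {
  const byId = {};
  rows.forEach(r => { byId[r.id] = { ...r, children: [] }; });
  let root = null;
  rows.forEach(r => {
    if (r.parentId === null) {
      root = byId[r.id];
    } else {
      byId[r.parentId].children.push(byId[r.id]);
    }
  });
  return root;
}

const tree = buildTree(rows);
```

This is exactly why the Adjacency List is "easy to do": the table maps one-to-one onto the rows array above, and all the tree shape is reconstructed in code (or with a recursive CTE on the database side).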

Jenkins and CI

Hello everybody,

a few days ago I had the chance to configure Jenkins.

I will omit the purpose and value of CI; I just want to mention one error message which you can encounter if you install the MSBuild plugin for building a .sln file.

If you point to the MSBuild file, I mean the msbuild.exe file, you'll see an interesting warning message:

d:\msbuildsln.bat is not a directory on the Jenkins master (but perhaps it exists on some slaves)

For me the purpose and value of this message is a puzzle till now, but I just want to note that you can ignore it without any side effects.