
Branch restriction in Automation schedule in Acumatica

Hello everybody,

today I want to describe one behaviour of Acumatica processing screens.

So, once upon a time I created a processing screen. Its purpose was simple: take data from an external source and insert it into Acumatica.

As that processing screen worked fine, a decision was made to create an automation schedule so that Acumatica would execute the screen automatically. And then the following issue arose: although the screen worked great in manual mode, it didn't work at all in automation schedule mode.

After a long investigation I found the following:

  1. Automation schedules are executed under the admin user account
  2. Although admin should have access to everything, that is not always the case in Acumatica; in my case the admin account didn't have access to branches

As a result, nothing was imported. So, what was the solution to that issue? Wrap the processing code in a PXReadBranchRestrictedScope like this:

                var thr = new Thread(
                    () =>
                    {
                        // Remove the per-user branch restriction so the scheduled run can see all branches
                        using (new PXReadBranchRestrictedScope())
                        {
                            // Process the next chunk of sizeOfOneChunk customers
                            var portionsCustomers = customers.Skip(a * sizeOfOneChunk).Take(sizeOfOneChunk).ToList();
                            InsertCustomersFromList(portionsCustomers);
                        }
                    }
                );

As soon as I applied PXReadBranchRestrictedScope, life became easier and data started to flow into Acumatica under the automation schedule as well. Speaking specifically about PXReadBranchRestrictedScope, it has the following purpose: remove the restriction by branch that is automatically applied by default to the current user.



How helping others can transform your life

Here in Ukraine there is a joke: nothing spoils the health of a Ukrainian more than the wealth and welfare of his neighbor.

I also often hear the statement that helping others can change your life for the better. I've spent some time looking for ways to help others, but not by giving money. As one man once said, give somebody a fish two times and the third time that person will demand a fish from you. So I wanted to be a person who gives people a fishing rod instead of just giving them fish.

As a result I decided to teach some close friends of mine programming. Well, that is no surprise, because I'm quite skilled in that area, and what else could I teach? And you know, it changed my life in ways that I never expected.

Lesson 1. Obedience

My first student was George. Usually I don't like to deal with relatives of my friends if I don't know them personally, but I decided not to be dogmatic and gave it a try. George was a very proactive and smart guy, and one of his decisions was to go to programming courses. That was a really good decision, and he learned plenty of stuff in C#, as well as in HTML, CSS and JavaScript. After he finished those courses I gave him a mock job interview and decided to help him land a job. It was the first time in my life that I heard that for somebody the lack of a specialized diploma could be the missing piece: not a big minus, but something a person is better off having if he doesn't have a specialized education. I didn't ask George to go for a degree, but decided to help him and gave him a single recommendation: participate in the development of some open source tool. I supposed that this kind of activity could help him get the needed experience. Meanwhile I myself started to search for some job that I could give to him. In time I found a guy on Upwork who was willing to cooperate with both of us. I was very happy, asked George to make the HTML/CSS markup directly inside the solution, and started to program the tricky C# part myself. And you know what? Every day I asked George about results, he reported to me about the good progress he had made, and our cooperation seemed to be heading for a good outcome until... Until I found that George had just created an HTML page instead of putting it into the solution the way I had asked him. As a result: a missed deadline, a negative impression of me on Upwork, and lesson #1:

  1. Obey your mentor or superior even if you are sure that you know better.

After that story I have never done anything against what my Team Lead told me to do before discussing my idea with him. I learned this lesson because I saw how bad it is to say "yes, I will do as you say" and then do something similar to what was asked, but not exactly as agreed. Interestingly, George finally became a software developer, but it took him 5 years. So one more tiny lesson: commitment can bring you good results.

Lesson 2. It is hard to become a programmer.

My second student was another friend who looked to me like a very gifted person. Let's name him John. He didn't have money to go to programming courses, so I organized a Pluralsight subscription for him and monitored his progress every two weeks. And I can say the following: John is a really gifted person, but not in programming; in selling. After a few months of training I noticed that John could convince me that he really knew and understood parts of C#, and only after deeper analysis was I able to see that it wasn't the case. On the example of John I realized that he could sell a viper its own poison at a discount and the viper would be happy. After that I recommended that John work for companies as a sales manager. And you know what? He gets promotion after promotion as a sales manager. I'd wish to be as good in programming as John is in selling. The most surprising part for me is that John took almost no courses on how to do sales, while on programming he spent a huge amount of time; yet he is a really brilliant sales manager. So lesson 2:

Some people need much more time to become programmers

Lesson 3. A mentor can make mistakes, but allow him to fix them.

My third student was one more friend; let's name him Michael. Michael was probably the most obedient student that I ever had. What was most impressive to me was that Michael had plenty of disappointments. For him it was really hard to learn programming. He had issues with understanding what an array is, what a collection is, what a class is, why on earth they are needed, why object-oriented programming is needed, etc. Sometimes I became so exhausted from explaining things that I had the desire to give up. But luckily neither I nor Michael did. After half a year Michael had his first job interview, which he failed. Try to guess why. Because he was good at understanding the language, but he was very bad at applying it. He wasn't able to reverse a string in C#. And that was my error as well. I had concentrated so much on training Michael in theory that I ignored the main purpose of a programming language: programming steps. As a result, after Michael's failure I made a change to his program and added much more coding practice. It is worth mentioning that Michael now works as a software developer too. So lesson 3:

A mentor can and will make mistakes, but allow the mentor to fix them.

Summary

I have mentored a few more students than the three mentioned in this article, but I just want to say the following:

  1. If you help somebody to become a programmer, you yourself will become a better programmer
  2. Our brain works differently when you try to understand something and when you explain it to somebody
  3. If you search for a mentor, keep in mind that he can make mistakes with you, but he still knows better if he is a working programmer

I hope this article will inspire already working programmers to stretch a helping hand to other people who want to enter the software development industry, as well as inspire those who search for a helping hand to realize that such a hand can really exist.



How to measure quality of learning part 2

Hello everybody,

today I want to add a few more notes about measuring the quality of learning, this time about classification tasks.

So, one of the ways can be measuring the number of wrong answers: the share of objects that the model classified incorrectly out of all objects in the set.

Imagine that your classification set has three possible labels: a (10 elements), b (15 elements), c (20 elements). And let's say that your model wrongly classified 2 out of a, 3 out of b and 4 out of c. In that case the share of wrong answers is calculated as follows:
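
A quick worked calculation for this example:

wrong answers = 2 + 3 + 4 = 9
total objects = 10 + 15 + 20 = 45
share of wrong answers = 9 / 45 = 0.2

so the corresponding share of correct answers is 1 − 0.2 = 0.8.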

Historically it happened that in classification tasks it is common to maximize the quality function, while in regression learning it is the other way around: the error function is minimized.

Another common measurement of classification quality is accuracy. The formula is like this:
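
The usual definition is the share of correct answers among all objects:

accuracy(a, X) = (number of objects for which a(xᵢ) = yᵢ) / (total number of objects)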

That is a very simple measurement of quality, and it is widely used. But it has some side effects. Let's consider a few examples.

Unbalanced sampling

Consider the following example. Let's say you have 1000 elements in the sampling: 950 elements belong to class a, and 50 elements belong to class b. And let's say you've built a model that always outputs class a. That model is useless, because it doesn't reproduce anything in the data. But the accuracy of that constant algorithm will be 0.95. Now let's say you still want to use accuracy for measurement. What kind of output can be considered reasonable? In our sample a meaningful accuracy lies in the range [0.95, 1], and not [0.5, 1] as in the case of balanced binary classification.

Consider another example. Let's say you want to build a model which will give advice: to give a loan or not to give a loan. And let's say you have two models:

Model 1:

  • 80 loans returned
  • 20 loans not returned

Model 2:

  • 48 loans returned
  • 2 loans not returned

Which model is better? Which outcome is worse: giving a loan to a bad customer who will not return it, or not giving a loan to a good customer who would return it? It looks like a few more characteristics are needed: accuracy doesn't take into account the cost of an error, so something more has to be considered. Also, adding precision prevents us from treating all objects as elements of one class, because in that case we would get an increase of False Positives.

Error matrix

Consider the following error matrix:

            y = 1 (belongs to class 1)    y = -1 (belongs to class 2)
a(x) = 1    True Positive (TP)            False Positive (FP)
            correct triggering            wrong triggering
a(x) = -1   False Negative (FN)           True Negative (TN)
            incorrect skipping            correct skipping

So, we have two columns: y = 1 and y = -1. If the model treats an element as belonging to y = 1, we say that the model triggered. If the model treats an element as belonging to y = -1, we say that the model skipped the element. In such a way we have two kinds of errors: wrong triggering (FP) and incorrect skipping (FN).

Consider then the following example. Let's say we have 200 objects: 100 belong to class 1 and another 100 belong to class -1. And take a look at the following confusion matrices:

Model 1:

y = 1 y = -1
a1(x) = 1 80 20
a1(x) = -1 20 80

Model 2:

y = 1 y = -1
a2(x) = 1 48 2
a2(x) = -1 52 98

And the question: which one is better, model 1 or model 2?

For such purposes we can use two characteristics: precision and recall. For me personally, recall can also be thought of as a characteristic of completeness: how much of the class the model actually "remembers".

Formula for precision:

and for recall:
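
Using the TP/FP/FN notation from the error matrix above, these can be written as:

precision(a, X) = TP / (TP + FP)

recall(a, X) = TP / (TP + FN)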

Now, if we put those values into the calculation, we'll receive the following:

precision(a1,x) = 0.8

recall(a1,x) = 0.8

precision(a2,x) = 0.96

recall(a2,x) = 0.48

From that standpoint we see that the second model is more precise, but at the sacrifice of recall. In other words, if the second model triggered, we can be more confident in the correctness of its result. Put another way, precision is the share of objects that the classifier marked as positive which really are positive, while recall tells you which share of the objects of class 1 the model actually found.

How to use precision and recall?

And now what? How can you use precision and recall? For example like this. You have a loan scoring task. The requirement can sound like: the share of unreturned loans should be smaller than 5%. In terms of today's formulas it looks like:

precision(a, X) ≥ 0.95. And your task is to maximize recall. 

Another example. You should find not less than 80% of sick people in some set. In terms of formulas it looks like this:

recall(a, X) ≥ 0.8 , and you maximize precision.

Unbalanced sampling

y = 1 y = -1
a(x) = 1 10 20
a(x) = -1 90 1000

Imagine that you get the result displayed in the table above. It has wonderful accuracy:

accuracy(a, x) = 0.9

but precision and recall help you to see real picture:

precision(a, x) = 0.33

recall (a, x) = 0.1

Should you use this model? Definitely not!



How to measure quality of learning

Hello everybody,

Today I want to describe some ideas about measuring the quality of learning.

First of all I want to point out the areas where you can apply those measurements. There are three of them:

  1. For setting the functional (loss) during learning
  2. For picking hyperparameters
  3. For evaluation of a ready-made model

Another way can be a combination: you can measure quality during learning with one measurement, but analyze the final model with another one.

MSE

So, let's start with the most common formula: mean squared error:
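
With a(xᵢ) denoting the model output for object xᵢ, yᵢ the desired answer and ℓ the number of objects in the set, it can be written as:

MSE(a, X) = (1 / ℓ) · Σᵢ (a(xᵢ) − yᵢ)²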

In words it reads like this: the difference between the predicted value and the desired value is squared, summed over all objects and finally averaged.

MSE has the following features:

  1. Easy to minimize
  2. Punishes bigger mistakes more strongly

What does this mean in practice? If your learning data set has many anomalies, then MSE is definitely not the right choice, because a model trained with MSE will learn the anomalies, and you don't want your model to learn anomalies, right? But if your data is without anomalies, then MSE is a really good choice.

MAE

Another choice for data scientists is mean absolute error:
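
In the same notation as MSE above:

MAE(a, X) = (1 / ℓ) · Σᵢ |a(xᵢ) − yᵢ|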

In words, it is the averaged absolute value of the difference between the desired output and the actual output.

It has the following features:

  1. Harder to minimize
  2. Punishes less for bigger mistakes

In practice this means that if your learning set has plenty of anomalies, then MAE is one of the functions to consider.

Coefficient of determination

Mean squared error has an interesting modification, the coefficient of determination. Take a look at the formula:
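
In the same notation as above it is usually written as:

R²(a, X) = 1 − ( Σᵢ (a(xᵢ) − yᵢ)² ) / ( Σᵢ (yᵢ − ȳ)² )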

where ȳ is the average answer value.

The main part of the coefficient is the fraction, in which the numerator is the sum of squared errors of the model, while the denominator is the sum of squared deviations of the answers from their average.

So, what does the coefficient of determination explain? It shows which part of the whole dispersion of answers is explained (modeled) by the model. Its value is also easy to interpret.

It has the following features:

For workable models the coefficient of determination is between zero and 1.

If the coefficient of determination is equal to 1, then we have built an ideal model.

If the coefficient of determination is equal to zero, then the model is no better than a constant prediction (the average answer).

If the coefficient of determination is smaller than zero, then the model is worse than a constant prediction.

Asymmetric error

Consider the following scenario. You are the owner of a shop that sells laptops, and you face the question: what amount of laptops to preorder? Another question you face: maybe it's better to have a little bit more laptops than needed? For such cases you can consider punishing an under-forecast more strongly than an over-forecast. One example of a function that can be used is the quantile error. Take a look at the formula:
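
A common way to write this loss (often called the quantile, or pinball, loss) is:

L_τ(a, X) = Σ over objects with a(xᵢ) < yᵢ of τ · |yᵢ − a(xᵢ)|  +  Σ over objects with a(xᵢ) ≥ yᵢ of (1 − τ) · |yᵢ − a(xᵢ)|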

It looks pretty complicated, so let's go into some details.

The parameter τ ∈ [0, 1] defines what to punish more strongly: an over-forecast or an under-forecast.

If τ is closer to 1, then the model will be punished more for an under-forecast, otherwise for an over-forecast.

If this formula looks complicated, below goes a step-by-step explanation:

So,

step #1: calculate the difference between the desired output and the model output

step #2: choose the multiplier

step #3: in case of an under-forecast (positive difference) multiply it by τ; in case of an over-forecast (negative difference) multiply it by τ − 1; then sum everything up.



InvokeIfRequired template

Hello everybody,

today I want to document a simple but very useful technique for working with multiple threads in a WinForms application.

Quite often it happens that you execute long-running calculations in some parallel thread and would like to notify the UI about results from time to time.

But if you try to do this directly, you'll get an error saying that the parallel thread cannot access the control because it didn't create that control. So how, then, do you update the UI?

The answer is simple: you should use the Invoke method of the control. In that case everything inside the Invoke call will be executed on the UI thread.

Needless to say, such an approach works but is somewhat cumbersome. So, in order to simplify life, I've created the following extension method:

using System.ComponentModel;
using System.Windows.Forms;

public static class Extensions
{
    // Runs the action on the UI thread when called from another thread,
    // otherwise executes it directly.
    public static void InvokeIfRequired(this ISynchronizeInvoke obj, MethodInvoker action)
    {
        if (obj.InvokeRequired)
        {
            var args = new object[0];
            obj.Invoke(action, args);
        }
        else
        {
            action();
        }
    }
}

And then, in order to update a text box field, the following logic was enough:

txtLog.InvokeIfRequired(() =>
{
    txtLog.Text += "\r\n check following id SubAccount: " + foundItem.RowId;
});



Different types of search in Acumatica

Hello everybody,

recently a friend of mine asked me a wonderful question:

In PXSelect command, I saw Search, Search2, Search3… keywords, please explain the difference.

That's a really good question which shows his attentiveness to details.

So, now let's go part by part.

Targets

First of all, the Search statement can be applied within these kinds of attributes: PXSelector, PXDbScalar and PXDefault.

You can also apply the Search statement to cases when you have updated something in the Acumatica cache and want to re-read that part. It can look like this:

Document.Search<POOrder.orderNbr>(currentPoOrder.OrderNbr, currentPoOrder.OrderType);

or like this:

[PXDefault(typeof (Search<Company.baseCuryID>))]

Don't mix attribute usage with Search in a graph. The first form you can use in your graph methods, while the second you can use in a DAC class.

As attributes

Speaking about attributes, Search allows you to set a specific value of a field in your DAC class. In other words, it selects an exact field rather than a record; the field specification goes as a type parameter. The syntax is otherwise identical to PXSelect.

The following search options exist:

Type of search  Description
Search<Field>  Gets a field value
Search<Field, Where>  Gets a field value with filtering by a Where condition
Search<Field, Where, OrderBy>  Gets a field value with filtering by a Where condition and ordering
Search2<Field, Join>  Gets a field value using Joins with other tables
Search2<Field, Join, Where>  Gets a field value using Joins with other tables and applying a Where condition
Search2<Field, Join, Where, OrderBy>  Gets a field value using Joins with other tables, applying a Where condition and ordering
Search3<Field, OrderBy>  Gets a field value with ordering applied
Search3<Field, Join, OrderBy>  Gets a field value with joins and ordering applied
Search4<Field, Aggregate>  Gets an aggregated field value
Search4<Field, Where, Aggregate>  Gets an aggregated field value with filtering by a Where condition
Search4<Field, Where, Aggregate, OrderBy>  Gets a field value with filtering by Where, aggregation and ordering
Search5<Field, Join, Aggregate>  Gets a field value with joins and aggregation applied
Search5<Field, Join, Where, Aggregate>  Gets a field value with joins, a Where condition and aggregation applied
Search6<Field, Aggregate, OrderBy>  Gets a field value based on aggregation and ordering
Search6<Field, Join, Aggregate, OrderBy>  Gets a field value based on a join, aggregation and ordering
Coalesce<Search1, Search2>  Gets a field value using Search1, or, if Search1 gives null, uses Search2
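
For example, a Search with a Where condition inside PXDefault could look like the sketch below. MyOrder and MyCustomer are hypothetical DACs used purely for illustration; the attribute defaults the order's currency from the customer selected on the order:

// Hypothetical DACs (MyOrder, MyCustomer): illustration only, not part of a real Acumatica module
[PXDefault(typeof(Search<MyCustomer.curyID,
    Where<MyCustomer.customerID, Equal<Current<MyOrder.customerID>>>>))]
public virtual string CuryID { get; set; }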

I hope that with this table you can now better understand which Search to use.

7 Comments

  • Dmitrey Makarov-Paton said

    Thanks, Yuriy, great post. Please post more. Pls, can you clarify one thing though?

    Case 1. (DataView.Search) Does the query executed against the database every time or it searches the caches first? What happens on the consecutive calls to the search.

    Case 2.
    Does the query executed when the item inserted into the cache only.? What happens on the consecutive calls also. Does the cache of the type specified is searched?

  • docotor said

    Hello Dmitrey,
    for case 1, queries will be executed against the db the first time; afterwards, if possible, against the cache in order to minimize the load on the db.
    For case 2, I'm not sure that I understand it. Can you please add a bit more description to your question?

  • Dmitrey said

    Hi Yuriy, thank you for answering. I'm observing a different behaviour though. Perhaps you can spot an issue. Please see my contrived example below. My observations as follows...
    1. DataView.Search always executes SELECT TOP(1) ... for a new key value. Regardless, where there items of this type in the cache.
    2. Same behaviour for Search in PXDefault

    Please try the code below. Set up SQL PROFILER to for RPC:Competed & SQL:BatchCompleted events.
    Thanks. Keep up with your awesome posts.

    public class DataContext : PXGraph<DataContext>, IDisposable {

    public PXSelect<PriceList> PriceList;
    public PXSelect<ParkingLot> ParkingLots;

    public void Init() {
    PriceList.Select();
    }

    public void Dispose() => this.Clear();
    public ParkingLot CreateParkingLot() => (ParkingLot)ParkingLots.Cache.CreateInstance();
    }


    public class Test {

    public void DoThings1() {

    using (DataContext dc = PXGraph.CreateInstance<DataContext>()) {
    dc.Init(); // Select all prices

    int parkingLotId = 2;
    dc.ParkingLots.Select(); // Database is quired, how many records you have in db now in cache
    ParkingLot item = dc.ParkingLots.Cache.Cached.Cast<ParkingLot>().FirstOrDefault(row => row.ParkingLotID == parkingLotId);
    Debug.Assert(item != null, "Must in the cache");
    dc.ParkingLots.SetValueExt<ParkingLot.grade>(item, "B"); // Your other example! item's status does not change to Updated!!

    //.....
    var instance1 = (ParkingLot) dc.ParkingLots.Search<ParkingLot.parkingLotID>(parkingLotId); // Database queried again. SELECT TOP(1) ... WHERE ID=1
    Debug.Assert(Object.ReferenceEquals(instance1, item), "Same");
    Debug.Assert(instance1.ParkingLotID == item.ParkingLotID, "Same");

    //....
    var instance2 = (ParkingLot) dc.ParkingLots.Search<ParkingLot.parkingLotID>(parkingLotId); // 2nd call, Database NOT queried here unless you change ID to 2.
    Debug.Assert(Object.ReferenceEquals(instance1, instance2), "Same");


    var instance3 = dc.ParkingLots.Insert(dc.CreateParkingLot().With(r => r.Grade = "C")); // Triggers query on pxdefault
    Debug.Assert(Decimal.Compare(instance3.PricePerDay.Value, 5) == 0);


    var instance4 = dc.ParkingLots.Insert(dc.CreateParkingLot().With(r => r.Grade = "B")); // Triggers query on pxdefault
    Debug.Assert(Decimal.Compare(instance4.PricePerDay.Value, 10) == 0);
    }
    }
    }


    public class PriceList : IBqlTable {
    public abstract class grade : IBqlField { }
    [PXDBString(1, IsKey = true)]
    public string Grade { get; set; }

    public abstract class price : IBqlField { }
    [PXDBDecimal]
    public decimal? Price { get; set; }
    }

    /// <summary>
    /// </summary>
    [Serializable]
    public class ParkingLot : IBqlTable {
    public abstract class parkingLotID : IBqlField {}
    [PXDBIdentity(IsKey = true)]
    [PXUIField(DisplayName = "ID", Enabled = false, Visible = false)]
    [PXDefault]
    public virtual int? ParkingLotID { get; set; }


    public abstract class grade : IBqlField { }
    [PXDBString(1, IsFixed = true)]
    [PXDefault("A")]
    public virtual string Grade { get; set; }


    public abstract class pricePerDay : IBqlField { }
    [PXDBDecimal()]
    [PXDefault(typeof(Search<PriceList.price, Where<PriceList.grade, Equal<Current<ParkingLot.grade>>>>))]
    public virtual decimal? PricePerDay { get; set; }
    }


    create table [dbo].[PriceList](
    [Grade] [nchar](1) NOT NULL PRIMARY KEY,
    [Price] [decimal](18, 2) NOT NULL,
    )

    create table [dbo].[ParkingLot](
    [ParkingLotID] [int] IDENTITY(1,1) NOT NULL primary key,
    [Grade] [nchar](1) NULL,
    [PricePerDay] [decimal](18, 2) NULL,
    )

  • docotor said

    Hi Dmitrey,
    thanks for valuable test and comment. You are right.

  • docotor said

    Just want to add. The queries that always execute against the database are PXSelectReadOnly and its modifications. Other kinds of queries can execute against the db, but can also skip the db and read everything from the cache, or part from the cache and part from the db.

  • Dmitrey Makarov-Paton said

    Thanks Yuri. Where do you work? What's your skype/email?

  • docotor said

    My skype is zaletskiy. For now I work as a remote contractor, and I mainly specialize in extending Acumatica. I've sent you an email message as well.



Update-Database Error on switching from .Net core 1.1 to 2.0

Hello everybody,

today I want to share a strange behaviour that I faced.

Recently I needed to switch a .NET Core 1.1 web app to .NET Core 2.0.

I found on the internet that the simplest way to achieve it would be just opening the project in Visual Studio 2017, and VS would switch the project by itself. I decided to give it a try.

Initially all went fine. Visual Studio 2017 gave me a very nice looking report which convinced me that life is easy and wonderful. It said that the project had been switched to .NET Core 2.0 successfully.

Then I tried to execute the Update-Database command in the Package Manager Console. Unfortunately, I saw the following error message:

An error occurred while calling method 'ConfigureServices' on startup class 'Startup'. Consider using IDbContextFactory to override the initialization of the DbContext at design-time. Error: This method could not find a user secret ID because the application's entry assembly is not set. Try using the ".AddUserSecrets(string userSecretsId)" or ".AddUserSecrets<TStartup>()" method instead.
No parameterless constructor was found on 'ApplicationDbContext'. Either add a parameterless constructor to 'ApplicationDbContext' or add an implementation of 'IDbContextFactory<ApplicationDbContext>' in the same assembly as 'ApplicationDbContext'.

After some research I found the following fix to apply:

instead of line 

builder.AddUserSecrets();

use line 

builder.AddUserSecrets<Startup>();
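
For context, in the default ASP.NET Core 1.x project template this call lives in the Startup constructor. A minimal sketch of where the change lands, assuming the standard template layout, looks like this:

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        // builder.AddUserSecrets();        // the old 1.1-style call that fails after retargeting
        builder.AddUserSecrets<Startup>();  // resolves the user secrets id from the Startup assembly
    }

    Configuration = builder.Build();
}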

and to my surprise Update-Database reported that it was successful.



How to modify Approve and Reject actions in Purchase orders screen

Hello everybody,

today I want to share some knowledge about an interesting feature of Acumatica: the Approve and Reject actions on the Purchase Orders screen.

When I was asked how long it would take to modify the behaviour of the Approve and Reject actions, I thought it would be an easy task: find the appropriate actions, override them and enjoy life. But with those two actions life is more complicated.

After speaking with Acumatica support I realized that those two actions are declared as Automation Steps, so in order to work with them you need some knowledge about Automation Steps. The only graph member related to those actions is of type EPApprovalAutomation.

So, in order to modify the behaviour of those two actions, the following code snippet is useful:

public class POOrderEntryExt : PXGraphExtension<POOrderEntry>
{
    public override void Initialize()
    {
        Base.FieldVerifying.AddHandler<POOrder.rejected>((s, a) =>
        {
            if ((bool?)a.NewValue == true)
            {
                if (Base.Document.Ask("Custom Warning", "Do you want to proceed?", MessageButtons.YesNo) != WebDialogResult.Yes)
                {
                    string errorMessage = "The Reject operation was canceled";
                    PXUIFieldAttribute.SetError<POOrder.approved>(s, a.Row, errorMessage);
                    throw new PXSetPropertyException(errorMessage);
                }
            }
        });
    }
}

Suppose that you also need to modify the caption of a button, for example the caption of the Reject button. I have a step-by-step picture manual which you can use for your purposes:



How to create learning set for neural network in deeplearning4j

Hello everybody,

today I want to document one simple feature of the Deeplearning4j library. Recently I had an assignment to feed a learning set into a neural network built with Deeplearning4j.

If your learning set is not big (later I'll explain what big means), then you can put all your data into an INDArray and then, based on that, create a DataSet. Take a look at fragments of XorExample.java:

        // list of input values, 4 training samples with data for 2
        // input-neurons each
        INDArray input = Nd4j.zeros(4, 2);

        // corresponding list with expected output values, 4 training samples
        // with data for 2 output-neurons each
        INDArray labels = Nd4j.zeros(4, 2);

Above, the Deeplearning4j team just reserved a small amount of memory for the learning set.

Next goes filling in the data:

// create first dataset
// when first input=0 and second input=0
input.putScalar(new int[]{0, 0}, 0);
input.putScalar(new int[]{0, 1}, 0);
// then the first output fires for false, and the second is 0 (see class
// comment)
labels.putScalar(new int[]{0, 0}, 1);
labels.putScalar(new int[]{0, 1}, 0);

// when first input=1 and second input=0
input.putScalar(new int[]{1, 0}, 1);
input.putScalar(new int[]{1, 1}, 0);
// then xor is true, therefore the second output neuron fires
labels.putScalar(new int[]{1, 0}, 0);
labels.putScalar(new int[]{1, 1}, 1);

// same as above
input.putScalar(new int[]{2, 0}, 0);
input.putScalar(new int[]{2, 1}, 1);
labels.putScalar(new int[]{2, 0}, 0);
labels.putScalar(new int[]{2, 1}, 1);

// when both inputs fire, xor is false again - the first output should
// fire
input.putScalar(new int[]{3, 0}, 1);
input.putScalar(new int[]{3, 1}, 1);
labels.putScalar(new int[]{3, 0}, 1);
labels.putScalar(new int[]{3, 1}, 0);

After that they create a DataSet with all inputs and outputs:

// create dataset object
DataSet ds = new DataSet(input, labels);

I will skip neural network creation and configuration, because the purpose of this post is just to explain how the learning set is placed in memory.

What is big?

As I mentioned initially, let's clarify what a big amount of data means for Deeplearning4j. I'll explain with an example. The RAM amount on my server is 256 GB.

I want to feed into memory 800 files, 2703360 bytes each. In total they will take 800 * 2703360 ≈ 2 GB.

But when I applied the Xor approach to my dataset, I consistently got the following error message:

Exception in thread "main" java.lang.IllegalArgumentException: Length is >= Integer.MAX_VALUE: lengthLong() must be called instead
at org.nd4j.linalg.api.ndarray.BaseNDArray.length(BaseNDArray.java:4203)
at org.nd4j.linalg.api.ndarray.BaseNDArray.init(BaseNDArray.java:2067)
at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:173)
at org.nd4j.linalg.cpu.nativecpu.NDArray.<init>(NDArray.java:70)
at org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory.create(CpuNDArrayFactory.java:262)
at org.nd4j.linalg.factory.Nd4j.create(Nd4j.java:3911)
at org.nd4j.linalg.api.ndarray.BaseNDArray.create(BaseNDArray.java:1822)

As far as I grasped from my conversations with support, Deeplearning4j attempts to do the following: create one flat array which will be processed on all processors (or video cards). In my case that was possible only when my learning set was not 800 files, but something around 80. That is far less than what I wanted to use for learning.

How to deal with a big data set?

After realizing the problem I had to dig deeper into the Deeplearning4j samples again. I found the very useful RegressionSum sample. There they create the data set with the help of the getTrainingData function. Below goes its source code:

    // nSamples, MIN_RANGE, MAX_RANGE and rng are static fields of the sample class
    private static DataSetIterator getTrainingData(int batchSize, Random rand){
        double [] sum = new double[nSamples];
        double [] input1 = new double[nSamples];
        double [] input2 = new double[nSamples];
        for (int i= 0; i< nSamples; i++) {
            input1[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            input2[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            sum[i] = input1[i] + input2[i];
        }
        INDArray inputNDArray1 = Nd4j.create(input1, new int[]{nSamples,1});
        INDArray inputNDArray2 = Nd4j.create(input2, new int[]{nSamples,1});
        INDArray inputNDArray = Nd4j.hstack(inputNDArray1,inputNDArray2);
        INDArray outPut = Nd4j.create(sum, new int[]{nSamples, 1});
        DataSet dataSet = new DataSet(inputNDArray, outPut);
        List<DataSet> listDs = dataSet.asList();
        Collections.shuffle(listDs,rng);
        return new ListDataSetIterator(listDs,batchSize);
    }

As you can see from the presented code, you need to:

  1. Create one or more input arrays.
  2. Create an output array.
  3. If you created more than one input array, merge them into one array.
  4. Create a DataSet that holds the inputs array and the outputs array.
  5. Shuffle (as usual, this improves learning).
  6. Return a ListDataSetIterator.

Configure memory for a class in IntelliJ IDEA

If you hope that the adventures with memory are over, I need to disappoint you: they are not. The next step needed for Deeplearning4j is configuring the available memory for the particular class you run. Initially I got the impression that this can be done by editing the vmoptions file of IntelliJ IDEA, but that assumption is wrong. You'll need to configure memory for the particular class like this:

1. Select your class and choose Edit Configurations:

2. Set the memory as presented in the screenshot:

In my case I used the following line for memory:

-Xms30G -Xmx30G -Dorg.bytedeco.javacpp.maxbytes=210G -Dorg.bytedeco.javacpp.maxphysicalbytes=210G

Keep in mind that the parameter -Dorg.bytedeco.javacpp.maxbytes should be equal to -Dorg.bytedeco.javacpp.maxphysicalbytes.

One final detail to keep in mind: you'll also need to think about the batch size parameter that you feed into the neural network while configuring the MultiLayerNetwork.



How override Persist method in Acumatica

Hello everybody,

today I want to show a code sample for overriding the Persist method in Acumatica.

Consider the following scenario: you need to modify the saving logic of the Purchase Orders screen in Acumatica. How can you achieve this? The following steps will help:

  1. Create an extension class for POOrderEntry
  2. Override the Persist method

Both of those steps are implemented below:

public class POOrderEntryExt : PXGraphExtension<POOrderEntry>
{
 
    [PXOverride]
    public void Persist(Action del)
    {
        //Here you can add some of your code that should be executed before persisting PO Order to database
        del();
    }
 
}
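
For instance, a hypothetical pre-save check could be placed before the del() call. The validation below is made up purely for illustration; it refuses to save an order that has no lines:

public class POOrderEntryExt : PXGraphExtension<POOrderEntry>
{
    [PXOverride]
    public void Persist(Action del)
    {
        // Hypothetical check, for illustration only: refuse to save an order without lines
        if (Base.Document.Current != null && Base.Transactions.Select().Count == 0)
        {
            throw new PXException("A purchase order must have at least one line before it can be saved.");
        }

        del(); // run the base Persist so the standard saving logic still executes
    }
}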

With such simple steps you can modify the persisting logic to any needed behaviour, or even turn it off.
