
Business Intelligence Components and How They Relate to Power BI

When I decided to write this blog post, I thought it would be a good idea to learn a bit about the history of Business Intelligence. I searched the internet and found this page on Wikipedia. The term Business Intelligence as we know it today was coined by an IBM computer science researcher, Hans Peter Luhn, in 1958, who wrote a paper in the IBM Systems journal titled A Business Intelligence System as a specific process in data science. In the Objectives and principles section of his paper, Luhn defines the business as "a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera." and an intelligence system as "the communication facility serving the conduct of a business (in the broad sense)". He then refers to Webster's dictionary's definition of the word Intelligence as "the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal".

It is fascinating to see how a brilliant idea from the past sets a concrete future that helps us have a better life. Isn't it precisely what we do in our daily BI processes, as Luhn described of a Business Intelligence System for the first time? How cool is that?

When we talk about the term BI today, we refer to a specific and scientific set of processes for transforming raw data into valuable and understandable information for various business sectors (such as sales, inventory, law, etc.). These processes help businesses make data-driven decisions based on the hidden facts existing in the data.

Like everything else, BI processes have improved a lot over their lifetime. In this post, I will try to make some sensible links between today's BI components and Power BI.

Generic Components of Business Intelligence Solutions

Generally speaking, a BI solution contains various components and tools that may differ from solution to solution depending on the business requirements, the data culture and the organisation's maturity in analytics. But the processes are much like the following:

  • We usually have multiple source systems built on different technologies containing the raw data, such as SQL Server, Excel, JSON, Parquet files, etc.
  • We integrate the raw data into a central repository to reduce the risk of interrupting the source systems by constantly connecting to them. We usually load the data from the data sources into the central repository.
  • We transform the data to optimise it for reporting and analytical purposes, and we load it into another storage. We aim to keep the historical data in this storage.
  • We pre-aggregate the data to certain levels based on the business requirements and load the data into yet another storage. We usually do not keep the whole history in this storage; instead, we only keep the data required to be analysed or reported on.
  • We create reports and dashboards to turn the data into useful information.

With the above processes in mind, a BI solution consists of the following components:

  • Data Sources
  • Staging
  • Data Warehouse/Data Mart(s)
  • Extract, Transform and Load (ETL)
  • Semantic Layer
  • Data Visualisation

Data Sources

One of the main goals of running a BI project is to enable organisations to make data-driven decisions. An organisation might have multiple departments using various tools to collect the relevant data every day, such as sales, inventory, marketing, finance, health and safety, etc.

The data generated by the business tools is stored somewhere using different technologies. A sales system might store its data in an Oracle database, while the finance system stores its data in a SQL Server database in the cloud. The finance team also generates some data stored in Excel files.

The data generated by these different systems is the source data for a BI solution.

Staging

We usually have multiple data sources contributing to the data analysis in real-world scenarios. To be able to analyse all the data sources, we require a mechanism to load the data into a central repository. The main reason for that is that the business tools constantly need to store data in their underlying storage, so frequent connections to the source systems can put our production systems at risk of becoming unresponsive or performing poorly. The central repository where we store the data from the various data sources is called Staging. We usually store the data in staging with no or only minor changes compared to the data in the data sources. Therefore, the quality of the data stored in staging is usually low and requires cleansing in the subsequent phases of the data journey. In many BI solutions, we use staging as a transient environment, so we delete the staging data regularly after it has been successfully transferred to the next stage, the data warehouse or data marts.

If we want to indicate the data quality with colours, it is fair to say the data quality in staging is Bronze.
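
To make the idea more tangible, here is a minimal sketch of a staging load in Python with pandas: the raw extracts are landed as-is, with only a load timestamp added. The SQL Server DSN, the Excel workbook, the table and the column names are all hypothetical, so treat this as an illustration of the pattern rather than a reference implementation.

```python
# Minimal staging-load sketch: land raw source data unchanged ("Bronze" quality).
# The DSN, file paths and table names below are hypothetical.
import datetime as dt
from pathlib import Path

import pandas as pd
import sqlalchemy as sa

Path("staging").mkdir(exist_ok=True)
engine = sa.create_engine("mssql+pyodbc://@sales_dsn")  # hypothetical ODBC DSN

def land_to_staging(df: pd.DataFrame, name: str) -> None:
    """Persist a source extract as-is, adding only a load timestamp."""
    df = df.copy()
    df["_loaded_at"] = dt.datetime.now(dt.timezone.utc)
    df.to_parquet(f"staging/{name}.parquet", index=False)

# Extract the raw data from each source with no transformation.
land_to_staging(pd.read_sql("SELECT * FROM dbo.Sales", engine), "sales")
land_to_staging(pd.read_excel("finance/budget.xlsx"), "finance_budget")
```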

Data Warehouse/Data Mart(s)

As mentioned before, the data in staging is not in its best shape and format. The data is generated disparately by multiple data sources, so analysing it and creating reports on top of the data in staging would be challenging, time-consuming and expensive. We therefore need to find the links between the data sources, cleanse, reshape and transform the data, and make it more optimised for data analysis and reporting activities. We store the current and historical data in a data warehouse, so it is quite normal to have hundreds of millions or even billions of rows of data over a long period. Depending on the overall architecture, the data warehouse might contain business-specific data encapsulated in a data mart or a collection of data marts. In data warehousing, we use different modelling approaches such as Star Schema. As mentioned earlier, one of the primary purposes of having a data warehouse is to keep the history of the data. This is a massive benefit of having a data warehouse, but the strength comes with a cost: as the volume of data in the data warehouse grows, it becomes more expensive to analyse. The data quality in the data warehouse or data marts is Silver.
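
As a rough illustration of what that transformation can look like, here is a hedged sketch of shaping the staged sales data into a simple Star Schema: a date dimension plus a sales fact table. It assumes the staged file and column names (SaleDateTime, StoreId, ProductId, Quantity, Amount), which are invented for this example; a real warehouse load would normally be handled by a dedicated ETL tool.

```python
# Simplified star-schema sketch: build a date dimension and a sales fact table
# from the staged data. All file paths and column names are assumptions.
from pathlib import Path

import pandas as pd

Path("warehouse").mkdir(exist_ok=True)
staged_sales = pd.read_parquet("staging/sales.parquet")
staged_sales["SaleDateTime"] = pd.to_datetime(staged_sales["SaleDateTime"])

# Date dimension: one row per calendar date with an integer surrogate key.
dim_date = (
    staged_sales["SaleDateTime"].dt.normalize().drop_duplicates()
    .to_frame(name="Date")
    .sort_values("Date")
    .reset_index(drop=True)
)
dim_date["DateKey"] = dim_date["Date"].dt.strftime("%Y%m%d").astype(int)

# Fact table: reference the dimension by its surrogate key and keep the measures.
fact_sales = staged_sales.assign(
    DateKey=staged_sales["SaleDateTime"].dt.strftime("%Y%m%d").astype(int)
)[["DateKey", "StoreId", "ProductId", "Quantity", "Amount"]]

dim_date.to_parquet("warehouse/dim_date.parquet", index=False)
fact_sales.to_parquet("warehouse/fact_sales.parquet", index=False)
```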

Extract, Transform and Load (ETL)

In the previous sections, we mentioned that we integrate the data from the data sources into the staging area, then cleanse, reshape and transform the data and load it into a data warehouse. To do so, we follow a process called Extract, Transform and Load or, in short, ETL. As you can imagine, the ETL processes are usually quite complex and expensive, but they are an essential part of every BI solution.

Semantic Layer

As we now know, one of the strengths of having a data warehouse is keeping the history of the data. But over time, keeping massive amounts of history can make data analysis more expensive. For instance, we may run into problems if we want to get the sum of sales over 500 million rows of data. So, based on the business requirements, we pre-aggregate the data to certain levels in a Semantic Layer to have an even more optimised and performant environment for data analysis and reporting purposes. Data aggregation dramatically reduces the data volume and improves the performance of the analytical solution.

Let's continue with a simple example to better understand how aggregating the data helps with data volume and data processing performance. Imagine a scenario where we store 20 years of data for a retail chain with 200 stores across the country, which are open 24 hours a day, 7 days a week. We store the data at the hour level in the data warehouse. Each store serves 500 customers per hour on average, and each customer buys 5 items on average. Here are some simple calculations to understand the volume of data we are dealing with (the short snippet after the list reproduces them):

  • Average hourly records per store: 5 (items) x 500 (customers served per hour) = 2,500
  • Daily records per store: 2,500 x 24 (hours a day) = 60,000
  • Yearly records per store: 60,000 x 365 (days a year) = 21,900,000
  • Yearly records for all stores: 21,900,000 x 200 = 4,380,000,000
  • Twenty years of data: 4,380,000,000 x 20 = 87,600,000,000
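
If you prefer to see the arithmetic as code, this tiny snippet reproduces the same numbers:

```python
# Back-of-the-envelope row counts for the retail example above.
items_per_customer = 5
customers_per_hour = 500
stores = 200
years = 20

hourly_per_store = items_per_customer * customers_per_hour   # 2,500
daily_per_store = hourly_per_store * 24                       # 60,000
yearly_per_store = daily_per_store * 365                      # 21,900,000
yearly_all_stores = yearly_per_store * stores                 # 4,380,000,000
total_records = yearly_all_stores * years                     # 87,600,000,000

print(f"{total_records:,}")  # 87,600,000,000
```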

A simple summation over more than 80 billion rows of data would take a long time to calculate. Now, imagine that the business needs to analyse the data at the day level. So, in the semantic layer, we aggregate those rows to the day level. In other words, 87,600,000,000 ÷ 24 = 3,650,000,000, which is a much smaller number of rows to deal with.

The other benefit of having a semantic layer is that we usually do not need to load the whole history of the data from the data warehouse into the semantic layer. While we might keep 20 years of data in the data warehouse, the business might not need to analyse 20 years of it. Therefore, we only load the data for the period required by the business into the semantic layer, which further improves the overall performance of the analytical system.

Let's continue with our previous example. Say the business requires analysing only the past 5 years of data. Here is a simplistic calculation of the number of rows after aggregating the data for the past 5 years at the day level: 3,650,000,000 ÷ 4 = 912,500,000.
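
The snippet below is a hedged, scaled-down sketch of the same idea using pandas: hour-level facts are aggregated to the day level and then filtered to a recent window. The data is synthetic and tiny so the example actually runs; the column names (SaleHour, StoreId, Amount) are invented for illustration, and in Power BI this pre-aggregation would typically live in the data model rather than in Python.

```python
# Semantic-layer sketch: aggregate hour-level rows to day level, then keep
# only the recent period the business needs. Synthetic data, assumed columns.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
hours = pd.date_range("2020-01-01", "2022-12-31 23:00", freq="h")
hourly = pd.DataFrame({
    "SaleHour": np.tile(hours, 3),                # 3 stores' worth of hours
    "StoreId": np.repeat([1, 2, 3], len(hours)),
    "Amount": rng.uniform(100, 5000, size=len(hours) * 3),
})

# Pre-aggregate: hour level -> day level (24x fewer rows).
daily = (
    hourly.assign(SaleDate=hourly["SaleHour"].dt.normalize())
    .groupby(["SaleDate", "StoreId"], as_index=False)["Amount"].sum()
)

# Load only the period the business needs, e.g. the last year of data.
recent = daily[daily["SaleDate"] >= daily["SaleDate"].max() - pd.DateOffset(years=1)]

print(len(hourly), len(daily), len(recent))
```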

The data quality of the semantic layer is Gold.

Data Visualisation

Data visualisation refers to representing the data from the semantic layer with graphical diagrams and charts using various reporting or data visualisation tools. We may create analytical and interactive reports, dashboards, or low-level operational reports. The reports run on top of the semantic layer, which gives us high-quality data with excellent performance.
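
Purely as an illustration of this last step, here is a small, self-contained sketch that charts month-level totals computed from day-level data with matplotlib. In a Power BI solution this would of course be done with report visuals rather than Python; the synthetic data and column names are assumptions.

```python
# Visualisation sketch: plot monthly totals computed from synthetic day-level data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

days = pd.date_range("2022-01-01", "2022-12-31", freq="D")
daily = pd.DataFrame({
    "SaleDate": days,
    "Amount": np.random.default_rng(1).uniform(50_000, 150_000, len(days)),
})

monthly = (
    daily.assign(Month=daily["SaleDate"].dt.to_period("M").dt.to_timestamp())
    .groupby("Month", as_index=False)["Amount"].sum()
)

plt.figure(figsize=(8, 3))
plt.plot(monthly["Month"], monthly["Amount"], marker="o")
plt.title("Total sales by month")
plt.xlabel("Month")
plt.ylabel("Sales amount")
plt.tight_layout()
plt.show()
```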

How Different BI Components Relate

The following diagram shows how the different Business Intelligence components relate to each other:

Business Intelligence (BI) Components

In the above diagram:

  • The blue arrows show the more traditional processes and steps of a BI solution.
  • The dotted grey arrows show more modern approaches where we do not need to create a data warehouse or data marts; instead, we load the data directly into the semantic layer and then visualise it.
  • Depending on the business, we might need to go through the dotted orange arrow and create reports directly on top of the data warehouse. Indeed, this approach is legitimate and still used by many organisations.
  • Visualising the data on top of the staging environment (the dotted red arrow) is not ideal, but it is not uncommon to need some operational reports on top of the data in staging. An example is creating ad-hoc reports on top of the current data loaded into the staging environment.

How Business Intelligence Components Relate to Power BI

To understand how the BI components relate to Power BI, we have to have a good understanding of Power BI itself. I already explained what Power BI is in a previous post, so I suggest you check it out if you are new to Power BI. As a BI platform, we expect Power BI to cover all or most of the BI components shown in the previous diagram, which it indeed does. This section looks at the different components of Power BI and how they map to the generic BI components.

Power BI as a BI platform contains the following components:

  • Power Query
  • Data Model
  • Data Visualisation

Now let's see how the BI components relate to the Power BI components.

ETL: Power Query

Power Query is the ETL engine of the Power BI platform. It is available both in the desktop applications and in the cloud. With Power Query, we can connect to more than 250 different data sources, cleanse the data, transform the data and load the data. Depending on our architecture, Power Query can load the data into:

  • the Power BI data model, when used within Power BI Desktop
  • the Power BI Service internal storage, when used in Dataflows

With the integration of Dataflows and Azure Data Lake Gen 2, we can now store the Dataflows' data in an Azure Data Lake Storage Gen 2.

Staging: Dataflows

The Staging component is available only when using Dataflows with the Power BI Service. Dataflows use the Power Query Online engine. We can use Dataflows to integrate the data coming from different data sources and load it into the internal Power BI Service storage or an Azure Data Lake Gen 2. As mentioned before, the data in the staging environment is then used in the data warehouse or data marts of a BI solution, which translates to referencing the Dataflows from other Dataflows downstream. Keep in mind that this capability is a Premium feature, so we must have one of the Premium licences.

Data Marts: Dataflows

As mentioned earlier, Dataflows use the Power Query Online engine, which means we can connect to the data sources, cleanse and transform the data, and load the results into either the Power BI Service storage or an Azure Data Lake Storage Gen 2. So, we can create data marts using Dataflows. You may ask why data marts and not data warehouses. The fundamental reason lies in the differences between data marts and data warehouses, which is a broader topic and out of the scope of this blog post. In short, though, Dataflows do not currently support some fundamental data warehousing capabilities such as Slowly Changing Dimensions (SCDs). The other point is that data warehouses usually handle massive volumes of data, much more than the volume handled by data marts. Remember, data marts contain business-specific data and do not necessarily contain a lot of historical data. So, let's face it: Dataflows are not designed to handle the billions or hundreds of millions of rows of data that a data warehouse can handle. We currently accept the fact that we can design data marts in the Power BI Service using Dataflows without spending hundreds of thousands of dollars.

Semantic Layer: Data Model or Dataset

In Power BI, depending on where we develop the solution, we load the data from the data sources into the data model or a dataset.

Using Power BI Desktop (desktop application)

It is recommended that we use Power BI Desktop to develop a Power BI solution. When using Power BI Desktop, we directly use Power Query to connect to the data sources and cleanse and transform the data. We then load the data into the data model. We can also implement aggregations within the data model to improve performance.

Using Power BI Service (cloud)

Developing a report directly in the Power BI Service is possible, but it is not the recommended method. When we create a report in the Power BI Service, we connect to the data source and create the report. The Power BI Service does not currently support data modelling, so we cannot create measures, relationships, etc. When we save the report, all the data and the connection to the data source are saved in a dataset, which is the semantic layer. Since data modelling is not currently available in the Power BI Service, the data in the dataset will not be in its cleanest state, which is a good reason to avoid this method of creating reports. But it is possible, and the choice is yours after all.

Data Visualisation: Reports

Now that we have the prepared data, we visualise it using either the default visuals or custom visuals within Power BI Desktop (or in the service). The next step after finishing the development is publishing the report to the Power BI Service.

Data Model vs. Dataset

At this point, you may ask about the differences between a data model and a dataset. The short answer is that the data model is the modelling layer that exists in Power BI Desktop, while the dataset is an object in the Power BI Service. Let us continue with a simple scenario to understand the differences better. I develop a Power BI report in Power BI Desktop, and then I publish the report to the Power BI Service. During my development, the following steps happen:

  • From the moment I connect to the data sources, I am using Power Query. I cleanse and transform the data in the Power Query Editor window. So far, I am in the data preparation layer; in other words, I have only prepared the data, and no data has been loaded yet.
  • I close the Power Query Editor window and apply the changes. This is where the data starts being loaded into the data model. Then I create the relationships, create some measures, etc. So, the data model layer contains both the data and the model itself.
  • I create some reports in Power BI Desktop.
  • I publish the report to the Power BI Service.

Here is where the magic happens. When the report is published to the Power BI Service, the following changes apply to my report file:

  • The Power BI Service encapsulates the data preparation (Power Query) and the data model layers into a single object called a dataset. The dataset can then be used by other reports as a shared dataset, or by other datasets in a composite model architecture.
  • The report is saved as a separate object within the dataset. We can pin the reports or their visuals to dashboards later.

There you have it. I hope this blog post helps you better understand some fundamental concepts of Business Intelligence, its components and how they relate to Power BI. I would love to hear your feedback or answer your questions in the comments section below.
