Friday 30 March 2018

SAP BO: Universe designer: IDT (Information Design Tool)

Today I would like to discuss a topic from the Business Objects domain, or Business Intelligence as we say nowadays: IDT. Everyone in this domain is well aware that to date we use 2 widely known tools in BO, i.e. UDT and IDT. What actually matters is what our client wishes and which tool we are more comfortable with for designing a universe in the BO domain. So, let's start with IDT (Information Design Tool) in this post.

I hope this helps you understand it and makes the work you are doing easier.

               ------------IDT-------------

It is similar to the tool in the older SAP BO 3.1 release, which was known as UDT (Universe Design Tool). There the universe was saved with the '.unv' extension; here it is saved with the '.unx' extension. Moreover, IDT is tab based, while the previous tool was not like that and was window based.

Let's start from the beginning, which is setting up the connection. Either the connection is already set up or we create a new one. In my case I created an ODBC connection to a MS Access database.

For this, on my computer I went to the local C: drive and opened the Windows folder, inside which there is a subfolder containing several executables. From there I ran odbcad32.exe, and a new window opened: the ODBC Data Source Administrator.

In this window I went to the first tab, User DSN, which contains the user data sources, and I clicked BPRM CONNECTION, which has SQL Server as its driver. Then I went to the next tab, System DSN, where I clicked BOE05_ADVENTURE WORK and then Add. After clicking the Add button, a new window opened, Create New Data Source; I scrolled down, clicked Microsoft Access Driver (*.mdb), and clicked Finish. After this another window opened, ODBC Microsoft Access Setup, in which I gave the data source name "MY SALES", added a description, and clicked Select, which opened a further window called Select Database. I selected the database, clicked OK, and in this way my new connection was ready.

Then I opened SAP BusinessObjects Information Design Tool, which works with 2 layers, i.e. the business layer and the data foundation layer. When IDT opens we can see, on the left-hand side, the list of local projects at the top and, below it, the repository resources list.

Above the local projects, in the top left corner, there is the File option. Click it, go to New, then click New Project. Give the project a name; I named mine "MY SALES UNIVERSE" and clicked Finish.

After that, in the Local Projects section on the LHS, we can see "my sales universe". Right-click it and go to New, or go back to the File option at the top, and click "Relational Connection", which is what I did. A new window opened, "New Relational Connection", in which I gave the resource name "MY SALES UNIVERSE" and a brief description, then clicked Next. This opened "Database Middleware Driver Selection", which lists the drivers; I expanded "Microsoft", under it "MS Access 2003", under which there were ODBC drivers, and clicked Next. In the "parameters" section I clicked Test Connection, which showed the result "Test Successful", and finally I clicked Finish.

Then again I went to "my sales universe" on the left side under Local Projects, right-clicked it, clicked New, and then clicked the Data Foundation option. A new window opened asking for the resource name; I named it "MY SALES UNIVERSE DATA FOUNDATION" and clicked Next. Now we see "Select Data Foundation Type" with 2 options, "single source" and "multisource-enabled"; I took single source and clicked Next, then selected the connection and clicked Finish.

On the RHS pane of IDT, right-click Tables and then click Insert Tables. A new window opens from which we can select the tables we require for the project. I took the tables SALES FACT, ITEM DIMENSION, COUNTRY DIMENSION and DATE DIMENSION, then clicked Finish, and these 4 tables appeared on the data foundation layer of IDT.

Then I created joins, as I used to do in UDT. Then, at the top, I clicked Actions, then Detect, and checked the cardinalities so that any sort of error could be caught.
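The idea behind detecting cardinalities is essentially to compare key uniqueness on both sides of a join: a side is "1" if the join column values are unique in that table, "n" otherwise. A rough sketch of that idea (not SAP's actual algorithm; the table and column names are made up):

```python
def detect_cardinality(left_rows, right_rows, key):
    """Return the cardinality of a join, e.g. '1-n', by checking key uniqueness."""
    def side(rows):
        values = [row[key] for row in rows]
        return "1" if len(values) == len(set(values)) else "n"
    return f"{side(left_rows)}-{side(right_rows)}"

# Hypothetical sample data: one row per country vs. many sales rows per country.
country_dim = [{"country_id": 1}, {"country_id": 2}]
sales_fact = [{"country_id": 1}, {"country_id": 1}, {"country_id": 2}]

print(detect_cardinality(country_dim, sales_fact, "country_id"))  # 1-n
```

A mismatch between the detected cardinality and the one you set manually is exactly the kind of error this check surfaces.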

Then I went again to "my sales universe" on the left side under Local Projects, right-clicked it, and clicked Business Layer, which opened a new window called "New Business Layer". There you choose the type of data source; I selected relational data source, again gave the resource name "MY SALES UNIVERSE", then in the data foundation section selected and enabled "Automatically create classes and objects", and clicked Finish.

After this we can see the business layer on the LHS with classes and objects in it. We can delete any class or object if required, or else leave it, and we can add new objects under a class if required. For example, I added an AMOUNT object from the SALES FACT table under a class named "Measure".

After this I again right-clicked "my sales universe" under Local Projects and clicked Publish to Repository. We can also publish to a local folder, which is what I did in my project. A new window opened, named "Publish a Universe to a Local Folder", in which I checked the integrity by clicking Check All, then clicked Next and Finish. It shows "universe published successfully".

After this I opened WebI, i.e. Web Intelligence, in which I selected the universe option because I had created the data in a universe. We can also fetch data from Excel or BEx queries; in my case I fetched data from the universe I created for my project.

Then click on refresh universe and we can see "MY SALES UNIVERSE.unx"; click it and open it. It opens with the classes and objects on the left side. On the right side there are boxes such as Result Objects, Query Filters and Data Preview.

We can now drag and drop objects from the left side into the Result Objects box as per our requirement. I took Region Name, Country Name, Date Year, Item Name and Amount, then clicked the Run Query button at the top right. After that we can see the report for the query objects we ran.

Now I gave my report a name, "MY SALES REPORT". After that we can make any sort of modifications, such as adding filters, hierarchies, prompts, variables, formulas, operators, hyperlinks, etc., and then I saved the document and finally published it.

Information Design Tool Flow Chart


Universe Design Tool Flow Chart

Business Intelligence: Performance Tuning

Performance is a word we use in everyday life. We all need to optimize our work and enhance our performance day by day to stay ahead and up to date.

Similarly, in Business Objects we need to enhance performance, or in other words optimize things so that reports run fast for the business requirements.

In Business Objects, performance enhancement can be done at 4 basic levels, which I will share with you. After the development work is done, there are often performance issues in the large and complex reports we create for end users. The best practice is to check performance first at the report level, then move on to the universe level, and then to the database and server levels. Since we mainly focus on the reporting side, it is best to do as much optimization as possible in the reports developed in Web Intelligence, so as not to touch the universe and the complex database.

Reporting Level: Web Intelligence (WEBI).

To enhance the performance of WebI reports we need to look at things such as the following:


  1. Reduce JAR file loading. BI 4.0 split the single JAR file into 60+ individual JARs for easier development and updating, whereas earlier BO versions such as 3.1 contained a single JAR file (ThinCadenza.jar). It is best practice to upgrade to BI 4.1 SP3+ for smooth running of WebI reports.
 *The JAR file is 44 MB and takes time to load.
  2. We must avoid putting too many reports in a document. Best practice is 10 reports per document, or a maximum of 20. Apart from this, we must check the number of rows per document: it must not cross 50,000 rows, i.e. roughly 2,500 rows per report.
  3. The smaller the report, the faster it runs. To avoid delays during run time and analysis of reports, make sure the specific business needs are fulfilled and the rest is removed. This also improves the refresh time and the performance while modifying the reports.
  4. Sometimes reports contain charts along with tables, which makes them large and causes refresh issues. In these cases we can use the report linking feature of WebI: create one report with the tables and another with the charts, and link them to one another.
* We can link reports together using OpenDocument linking and a combination of prompts and filters, apart from the Hyperlink wizard that is available in the HTML interface.
Scenario (linking):
Report 1 contains the summary of the sales totals for all branches of the company across the nation and is scheduled to run every night, which takes around 15 minutes. Users view the latest instance of report 1, which loads in a few seconds, and can drill down into the sales total of each branch, which launches report 2 displaying only the data of the particular branch drilled on.
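Drill-through links like the one in this scenario are typically built as OpenDocument URLs, where `lsS<PromptName>` answers a single-value prompt in the target document. A sketch of assembling such a URL (the server name, CUID and prompt name here are placeholders, not real values):

```python
from urllib.parse import urlencode

# Hypothetical values: replace with your server, the target document's CUID,
# and the prompt name defined in report 2.
base = "http://boserver:8080/BOE/OpenDocument/opendoc/openDocument.jsp"
params = {
    "sIDType": "CUID",          # identify the target document by CUID
    "iDocID": "AbCdEfGh1234",   # CUID of report 2 (placeholder)
    "lsSBranch": "London",      # answer report 2's 'Branch' prompt
}
url = base + "?" + urlencode(params)
print(url)
```

In practice the branch value would come from the cell the user drills on, so report 2 only ever fetches that branch's data.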
  5. We can limit the number of data providers; best practice is fewer than 15 data providers per document.
  6. The cache improves document load times, so we must check that it has not been disabled.
  7. Enabling auto width and auto height in WebI's Format Cell options forces the full document to be calculated during navigation, which slows it down and decreases performance. Best practice is to avoid it as much as possible.
Ex – suppose we have 10 reports in a document and we wish to jump from page 2 to page 10; we will notice that navigating to page 10 is slow.
  8. Many WebI reports contain charts with data points. If too many data points are used, it affects the APS, i.e. the Adaptive Processing Server, which makes the report run slowly.
  9. Try to avoid nested sections in a report, as they degrade performance. In the Format Section window of WebI there is an option to hide the section when the following are empty; we must not enable it.
  10. We must use query drill for drill-down reports. The "Use query drill" option can be found in the properties of the report. Enabling it makes drilling use the database instead of local data, reducing the amount of data stored locally for the drill session.
  11. Instead of a scope of analysis we can also use report linking, since a scope of analysis retrieves extra data from the database, which is stored in the cube and used for drill-down purposes.
  12. In formulas we can use In instead of ForEach and ForAll. Similarly, instead of the Where operator we can use If-Then-Else, which can be faster.

Universe Level: Semantic Layer

  1. While creating a universe we must try to use only the required objects to build the universe design that meets the business requirement.
  2. Keep the query as simple as possible and avoid making it more complex than needed, i.e. build the query so that it contains only the objects required in the documents.
  3. Optimizing the array fetch size is a way to enhance universe performance, as the array fetch size sets the maximum number of rows retrieved with each fetch from the database.
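The effect of the array fetch size is easy to quantify: the number of database round trips is roughly the total row count divided by the fetch size. A quick illustration (the numbers are made up):

```python
import math

def round_trips(total_rows, fetch_size):
    """Approximate number of fetch round trips needed to retrieve all rows."""
    return math.ceil(total_rows / fetch_size)

# Hypothetical report of 100,000 rows: raising the fetch size from 10 to 250
# cuts the round trips from 10,000 to 400.
print(round_trips(100_000, 10))   # 10000
print(round_trips(100_000, 250))  # 400
```

Larger fetch sizes use more memory per fetch, so this is a trade-off rather than "always set it as high as possible".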
  4. While applying filters, we must prefer query filters over report filters, since a query filter modifies the SQL query to restrict the data fetched and then displayed, while a report filter simply modifies the displayed data.
Ex – a sales report by year is running and the user wishes to see only the November data. With a query filter, the WHERE clause fetches only the data for November rather than the other months. With a report filter we see the same November data, but the rest of the months' data is still there, hidden in the cube.
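The difference can be sketched in plain terms: a query filter restricts what ever leaves the database, while a report filter fetches everything into the local cube and only restricts what is shown. A toy illustration of the data-volume difference, not actual WebI behaviour:

```python
rows = [{"month": "October", "sales": 10},
        {"month": "November", "sales": 25},
        {"month": "December", "sales": 15}]

# Query filter: the restriction is applied at fetch time, so only
# November rows ever leave the "database".
fetched = [r for r in rows if r["month"] == "November"]

# Report filter: everything is fetched into the local cube; the filter
# only hides rows at display time.
cube = list(rows)
displayed = [r for r in cube if r["month"] == "November"]

print(len(fetched), len(cube))  # 1 3 -> same display, very different data volume
```

Both approaches display the same single November row; only the amount of data transferred and held locally differs.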
  5. From BI 4.0 onward, query stripping is a new component that enhances performance by eliminating unused objects from the query. The option has to be enabled at the report, query and universe level. With a BICS connection, query stripping is on by default, but for other connections we have to enable it.
  • While using a relational database, the parameters to set for query stripping are: "Allow query stripping" selected in the business layer, query stripping enabled in the query properties of the WebI document, and query stripping enabled in the document properties of the WebI document.
  • The enhanced query stripping parameter only optimizes the SELECT and GROUP BY clauses and does not modify the joins and other clauses.
  6. We must merge only the dimensions that are required.
  7. When we generate WebI reports, many servers are involved, such as the Adaptive Processing Server, the Connection Server and the Web Intelligence Processing Server. We must check these servers in the CMC and optimize them so that WebI reports run hassle-free.

DATABASE LEVEL OPTIMIZATION - 

Examine the execution plan of the SQL: determine the execution plan of the BO-generated SQL in the target database. EXPLAIN PLAN is a handy tool for estimating resource requirements in advance: it displays the execution plan chosen by the Oracle optimizer without executing the statement and gives insight into how to make improvements at the database level.

SERVER LEVEL OPTIMIZATION - 

If the performance of the system deteriorates when reports are accessed by a larger number of users over the web, then fix the problem at the fourth level, i.e., the server level (Level 4).

-> Scalable System

-> Event Based Scheduling

-> Report Server/Job Server closer to database server

-> Maximum Allowed Size of Cache

SAP Business Intelligence Admin: Basics Q&A

The admin is the person responsible for managing all server-level tasks in Business Objects/Intelligence. In this blog I will share a few questions and answers at a beginner/entry level.

I expect these questions and answers on administration will help you during interviews. These are the basic questions an interviewer can ask you in SAP BO/BI administration.

I would like you to know the responsibilities an admin has in the Business Objects domain. I have classified the responsibilities in a "time hierarchy" format, i.e. daily, weekly, fortnightly, monthly, occasionally and rarely. These are as follows –

  1. Daily duties:

  • Verification of all BO user logins and security checks (who is accessing what)
  • Monitoring BO server including Test (WMNTBO and WMNT BO1 ) running with optimal performance
  • Monitoring all DB services
  • Testing the availability of all Universe connections to respective Databases
  • Verifying site properties/user activities/ BO module activities from Web intelligence Console
  • Monitoring day to day scheduled jobs on Broadcast Agent (BCA)
  • Attending regular IT calls for Business Objects
  • Troubleshooting if any issues with Reports (Technical support to Users)
  2. Weekly:

  • Regular BO repository maintenance, like cleaning orphaned connections/universes/documents
  • Scan, repair and compact for all repository errors, and domain testing
  3. Fortnightly:

  • Running integrity checks on all universes (including test ones) to ensure thorough parsing (syntax/semantic checking)
  • Discuss performance issues with DBA
  4. Monthly:

  • Monitoring BO Licenses and Transfer issues
  5. Occasionally:

  • New Universes development / modification
  • Data mapping to new tables in the universes (if and when required)
  • BO resource allocation like linking universe / Documents to Users and User Groups (if and when required)
  6. Rarely:

  • Installation and upgrading BO server with new version of software (Business Objects)
  • Meeting with BO users to discuss supporting issues
  • End User training

Questions and Answers –

  1. What is a node?
A node is a group of SAP Business Objects Business Intelligence platform servers that run on the same host and are managed by the same Server Intelligence Agent (SIA). All servers on a node run under the same user account.
One machine can contain many nodes, so you can run processes under different user accounts. One SIA manages and monitors all of the servers on a node, ensuring they operate properly.
2. What are the components of SAP BO administration?
CMC – central management console
CCM – Central Configuration Manager
CMS – central management server [port number – 6400]
WAS – web application server [port number – 8080]
SIA – server intelligence agent [port number – 6410]
3. What is the default URL for the SAP BO CMC and InfoView?
In BO 3.1 the paths are –
CMC: http://hostname/cmcApp/login.jsp
InfoView: http://hostname/InfoViewApp/login.jsp
In BI 4.0 the paths are –
CMC: http://servername/BOE/CMC/
BI Launch Pad: http://servername/BOE/BI
4. What is the location of the data folder in SAP BO?
C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\
The Data folder contains temporary data.
The Logging folder contains log files.
5. What are the components available in SAP BO 4.0 CMC?
User Attribute Management, Visual Difference, Auditing, Monitoring, Cryptographic Keys, Promotion Management, Version Management.
6. What is Visual Difference?
Visual Difference enables you to view the differences between two versions of a supported file type (LCM BIAR) or a supported object type (LCM Job) or both. You can use this feature to determine the difference between files or objects to develop and maintain different report types. This feature gives a comparison status between the source and the destination versions.
For example, if a previous version of the user Report is accurate and the current version is inaccurate, you can compare and analyze the file to evaluate the exact issue.
7. What are the three types of visual difference? The three types of difference you can detect in a file or an object are:
  • Removed – In a report, if an element is missing in one of the file versions, the type of difference is shown as Removed. For example, the element could be a row, section instance, or even a block.
  • Modified – In a report, if there is a different value between the source version and the destination version, the type of difference is shown as Modified. For example, the value could be the cell content or the result of a local variable.
  • Inserted – In a report, if there is an element in the destination version but is not present in the source version, the type of difference is shown as Inserted.
8. What is auditing in BO 4.0 version? 
Auditing allows you to keep a record of significant events on servers and applications, which helps give you a picture of what information is being accessed, how it’s being accessed and changed, and who is performing these operations. This information is recorded in a database called the Auditing Data Store (ADS). Once the data is in the ADS, you can design custom reports to suit your needs. You can look for sample universes and reports on the SAP Developer Network.
In this context, an auditor is a system responsible for recording or storing information on an event, and an auditee is any system responsible for performing an auditable event. There are some circumstances where a single system can perform both functions.
9. What is Monitoring in BO 4.0?
Monitoring allows you to capture the runtime and historical metrics of SAP Business Objects Business Intelligence platform servers, for reporting and notification. The monitoring application helps system administrators to identify if an application is functioning normally and if the response times are as expected.
Monitoring allows you to:
  1. Check the performance of each server
  2. Check system availability and response time
  3. View the entire BI platform deployment based on Server Groups, Service Categories and Enterprise nodes in graphical and tabular format
  4. Check the health of each server
10. Which services must be in running mode at the time of upgrading SPs and FPs (support packages and fix packs) in Business Intelligence?
CMS, Input File Repository, Output File Repository.
11. What is the default session timeout of the SAP BO web tier?
20 minutes.
12. What are the tables available in the CMS database in BO?
There are 6 tables at the database level to store the metadata.
  • CMS_VersionInfo
    The table contains the current version of BOE.
  • CMS_InfoObjects6
    This is the main table in the repository. Each row in this table stores a single InfoObject. The table contains the following columns: ObjectID, ParentID, TypeID, OwnerID, Version, LastModifyTime, ScheduleStatus, NextRunTime, CRC, Properties, SI_GUID, SI_CUID, SIRUID, SI_INSTANCE_OBJECT, SI_PLUGIN_OBJECT, SI_TABLE, SI_HIDDEN_OBJECT, SI_NAMEDUSER, SI_RECURRING, SI_RUNNABLE_OBJECT, SI_PSS_SERVICE_ID, ObjName_TR, SI_KEYWORD, SI_KEYWORD_TR, LOV_KEY.
  • CMS_Aliases6
    This table maps the user alias(es) to the corresponding user ID. A user has an alias for each security domain in which they are members. For example, a user may have both a Win NT alias and an LDAP alias. Regardless of the number of aliases a user may have, in the BI Platform each user has only one user ID. The map is stored in a separate table to enable fast logins.
  • CMS_IdNumbers6
    The CMS uses this table to generate unique Object IDs and Type IDs. It has only two rows: an Object ID row and a Type ID row. The CMSs in a cluster use this table when generating unique ID numbers.
    GUIDs, RUIDs and CUID are generated with an algorithm that does not use the database.
  • CMS_Relationships6
    Relationship tables are used to store the relations between InfoObjects. Each row in the table stores one edge in the relation. For example, the relation between a Web Intelligence document and a Universe would be stored in a row in the WebI – Universe Relation table. Each relationship table has these columns:
    Parent Object ID, Child Object ID, Relationship InfoObject ID (this default InfoObject "DFO" describes the properties of the link between the two objects), member, version, ordinal, data. Relationship tables are defined by default objects.
  • CMS_LOCKS6
    This is an auxiliary table of CMS_Relationships6.
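The idea behind CMS_IdNumbers6 can be sketched as a shared counter: a CMS reserves a range of IDs by bumping the counter row in one database update, then hands them out locally, so two CMSs in a cluster can never issue the same Object ID. This is a simplification of the real mechanism, for illustration only:

```python
class IdNumbersRow:
    """Toy stand-in for the Object ID row in CMS_IdNumbers6."""
    def __init__(self):
        self.next_id = 1

    def reserve_block(self, size):
        """A CMS reserves a contiguous block of IDs in one database update."""
        start = self.next_id
        self.next_id += size
        return list(range(start, start + size))

row = IdNumbersRow()
cms_a = row.reserve_block(5)   # IDs 1..5 for the first CMS
cms_b = row.reserve_block(5)   # IDs 6..10 for a second CMS in the cluster
assert set(cms_a).isdisjoint(cms_b)  # no two CMSs can hand out the same ID
print(cms_a, cms_b)
```

GUIDs, RUIDs and CUIDs, by contrast, are generated by an algorithm that does not need this shared table at all.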
13. How to enable tracing without restarting servers?
By modifying the BO_trace.ini file.
14. What Admin activities have you done?
To date in administration I have done the following work:
creation of users and groups
granting rights and user administration
password recovery
scheduling and promotion management
upgrades
server configuration
generation of log files
15. Explain more about servers, capacity, load, etc.
Servers – a server is a computer program or device that provides functionality for other programs or devices, called clients.
Ex – CMS (Central Management Server), APS (Adaptive Processing Server), AJS (Adaptive Job Server), etc.
Capacity – capacity refers to the load the server can take. While generating log files to check for any sort of error, we raise the server's log level to high to get all the details of the server.
Load – I would define load as what sits on a server at a given time. For example, the amount of work being done by the APS on your system counts as load on the APS.
16. How comfortable are you with UNIX/Solaris? Do you know Unix commands?
I am not very comfortable with UNIX/Solaris. I know a few Unix commands but have never used them much. A few commands are as follows –
ls – list the files in the current directory
cd <dir> – change to directory <dir>
rmdir <dir> – remove a directory
more <file> – look at a file, one page at a time
grep <str> <files> – find which files contain a certain word
chmod <opt> <file> – change file permissions, e.g. to read-only or executable
17. Have you created User groups?
Creating a user group is a very common task in administration.
For this, log on to the CMC.
Under the Organize section, i.e. the first column, there is a list of options; the 5th option is Users and Groups.
Click it and you can view all the users and groups present there.
Now in the top left corner on the LHS there is an option for creating a user or group.
Click it, provide the credentials it asks for, such as the name and password, and click OK.
After this, both users and groups can be created.
Under a group you can keep as many users as you like.
18. Have you created Custom access levels?
Yes, I have created custom access levels in the CMC. The steps are as follows:
Open the CMC home page and select Access Levels.
A new window opens listing the names of the access levels with their descriptions below.
Ex – View, View on Demand, Schedule, etc.
Now go to the top left, and under Manage there is an option to create a new access level.
Once clicked, it redirects to a New Access Level page, where you can give it a name such as "Owner View".
Once done with that, you can see two panes, one on the LHS and the other on the RHS. On the LHS you can see Properties, User Security and Included Rights.
Click Included Rights, and on the RHS pane you can now see the view and schedule rights with the status marked as Denied, Granted or Unspecified.
You can modify them and save the result as a new access level.
19. What happens if the Adaptive Job Server is down?
If it is shut down, the following services cannot run –
Business Process BI Service
Client Auditing Proxy Service (collects auditing information from connected Rich Desktop and Web Intelligence clients)
Publishing Post Processing Service (responsible for any post-processing of a publication job, including PDF merging and publication extension processing)
Publishing Service (coordinates the publication of an object by communicating with other services)
Search Service (processes search requests and executes the indexing)
20. If User mistakenly deleted any of public or Personal folder/reports what’s your steps to recover?
In my working career as an administrator it has happened a few times that a user deleted a public or personal folder by mistake. The steps I carry out are as follows –
First of all I check whether there is a backup of the CMS database along with a backup of the file store. If so, I suggest the following steps:
- Dump the CMS DB backup into a separate schema
- Copy the backup of the filestore to the file system of the server
- Point the SIA to the backup CMS DB schema
- Point the Input and Output FRS to the backup directory of the FRS backup
- Via the Import Wizard, export the folder to a .biar file
- Point the SIA and FRS back to their original locations
- Import the .biar file
Or, more simply –
Go to Applications – Recycle Bin – Properties – enable it and set the number of days, e.g. 180.
Then you get the list of deleted items held in the recycle bin.
Click the desired ones and restore them.
21. If a user gets a 'Timed Out' error, what will you do?
Change the default session timeout value for the Java CMC.
The default session timeout value in the CMC is 20 minutes. Use this procedure if you want to modify it.
To change the session timeout value:
1. Verify that the Java SDK is installed and its location is in your PATH environment variable.
If you are able to execute the jar command and receive usage information on the command, proceed to the next step. If you receive an error message, install the Java SDK and add its location to your PATH.
2. Stop the web application server on the machine where webcompadapter.war is deployed.
3. Extract the web.xml file from the directory where webcompadapter.war is deployed:
jar -xvf webcompadapter.war WEB-INF/web.xml
4. Open web.xml in a text editor such as Notepad and search for the following section:
<session-config> 
<session-timeout>20</session-timeout>
</session-config>
5. Change the value between the <session-timeout> tags to the number of minutes you require for the session timeout.
6. Save web.xml.
7. Update webcompadapter.war with the modified web.xml file using the following command:
jar -uvf webcompadapter.war WEB-INF/web.xml
8. Restart your web application server and redeploy webcompadapter.war.
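If you want to script the edit itself, the substitution in step 5 can be sketched with Python's standard library (here on an inline sample; in practice you would read WEB-INF/web.xml extracted from webcompadapter.war, and back up the WAR first):

```python
import re

# Sample web.xml fragment standing in for the real file.
web_xml = """<session-config>
<session-timeout>20</session-timeout>
</session-config>"""

def set_session_timeout(xml_text, minutes):
    """Replace the value between the <session-timeout> tags."""
    return re.sub(r"(<session-timeout>)\d+(</session-timeout>)",
                  rf"\g<1>{minutes}\g<2>", xml_text)

updated = set_session_timeout(web_xml, 60)
print(updated)
```

The repacking and redeployment steps (jar -uvf, restart) still have to be done exactly as described above.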
22. What will you do if critical errors occur?
If that happens, I discuss it with my team, and sometimes I consult the SAP support portal.
Have you migrated the CMS from one database to another?
23. What is a PID?
It refers to the process ID. In SAP BI, the SIA is recognized by the operating system and has a process ID, which can be viewed in Task Manager.
24. How will you resolve an SMTP error?
I would suggest fixing an SMTP error by one of the following methods:
- Disabling the antivirus
- Changing the SMTP port
- Repairing the profile or creating a new one
25. What is the procedure to move reports from DEV to QA to PROD?
With the help of Life Cycle Manager.
26. What are Input file repository server and output file repository server?
The Input File Repository Server manages all of the report objects and program objects that have been published to the repository. It can store .RPT, .CAR, .EXE, .BAT, .JS, .XLS, .DOC, .PPT, .RTF, .TXT, .PDF and .WID files. .RPT files are stored as report definition files only, which do not contain any data. The Report Properties page of the CMC shows you the location of the input report files; for example, an RPT report template can be found at frs://Input/a_084/004/000/1108/ca067d4f1710cbc.rpt
The Output File Repository Server manages all of the report instances (saved-data copies of reports) generated by the Report Job Server or the Web Intelligence Report Server, the program instances generated by the Program Job Server, and the instances generated by the LOV Job Server. It can store the following files: .RPT, .CSV, .XLS, .DOC, .RTF, .TXT, .PDF, .WID. .RPT and .WID files are stored as reports/documents with saved data.
Since the Output FRS stores the report instances, deleting instances removes the instances, not the actual reports; the report structure is stored in the Input FRS.
Using Query Builder we can find the location of the Output File Repository files. The following query may be handy if you already know the report name:
SELECT SI_NAME, SI_KIND, SI_FILES, SI_INSTANCE FROM CI_INFOOBJECTS
WHERE SI_NAME = 'xxxx'
If the SI_INSTANCE value is false, then the InfoObject is the actual report and SI_PATH will be in the Input FRS. If SI_INSTANCE is true, then the InfoObject is an instance and SI_PATH will be in the Output FRS.
Limitations for File Repository Servers
  • The Input and Output File Repository Servers cannot share the same directories. This is because one of the File Repository Servers could then delete files and directories belonging to the other.
  • In larger deployments, there may be multiple Input and Output File Repository Servers, for redundancy. In this case, all Input File Repository Servers must share the same directory. Likewise, all Output File Repository Servers must share a directory.
27. What are the steps if a user is not able to log in?
If a user is unable to log in, the usual fix is to reset the password; that is the most common login issue I have seen with users. It can also happen that the server has crashed or been shut down, in which case the user cannot log in.
As an administrator, when I have had to reset the Administrator password I followed these steps:
  1. Open the Crystal Configuration Manager (CCM).
  2. Stop the Central Management Server (CMS).
  3. Open the CMS database administrator application (for example: SQL server enterprise manager if you are using MS SQL2000).
  4. Run the following SQL statement on the CMS database:
SELECT *
FROM CMS_InfoObjects5
WHERE (ObjectID = 12)
  5. Delete the record that is returned.
  6. Restart the CMS from the CCM. This recreates the Administrator account with a blank password.
  7. Log on to the Central Management Console (CMC) using Administrator as the user name and a blank password.
  8. Now you can set the password for the Administrator account by navigating to Home -> Users -> Administrator. Once you type the password you can also check the option "Password never expires" and then click "Update".
28. What will you do if scheduled reports fail?
In that case one can reschedule them, or follow these steps:
  1. In the Central Management Console (CMC), go to Events.
  2. In System Events or a sub folder of System Events, create a new Event.
  3. Select Schedule as the Type, and give it an appropriate name and description. Set the Result to Success, because we only want the event to trigger when the trigger report is successful. You can leave Alerting Enabled checked or unchecked, depending on whether you want to be notified each time the trigger report executes successfully.
  4. With that, the event setup is complete.
29. What is difference between Adaptive processing Server and Job processing server?
The basic difference is that the Adaptive Processing Server (APS) hosts a number of services in a single server process, whereas a Job Server handles the scheduled processing of jobs.
30. What is the difference between View and View on Demand?
The basic difference is that with View on Demand we can refresh the reports, whereas with View we cannot refresh; we can only view the objects and edit the report.

SAP LUMIRA (2.0) DISCOVERY

Today, I would like to share with you readers a reporting tool, or rather a tool with which we can create dashboards and represent our data in graphical format, known as SAP LUMIRA.

In my last tutorial about Lumira I shared how it works and what the features of this tool are, along with some questions and answers that can be of benefit from an interview point of view.

In this blog I would like to share with you Lumira 2.0, the version that is going to be released in the market by the beginning of winter, and which we can consider a "gift" from SAP for this year.

This would be the cover page for the SAP Lumira 2.0 version.

The best part of this version is that Design Studio now ships along with the Lumira tool, so we can say the two have been integrated into one tool. The next best part is that it is GUI based, so less JavaScript or CSS coding is required than Design Studio currently needs.

Features which are present in Lumira 2.0 version- 

  • Compared to older versions of Lumira, the new version combines the prepare, compose and visualize workflows in one canvas.
  • Interaction has been improved across the application.
  • BW and UNX connections are now available by default post installation.
  • In terms of filters, the new version has visualization-level filters in the canvas.
  • There is a consolidated view of applied filters, and we can now filter by multiple conditions.
  • Hierarchical list-of-values display and navigation are available.
  • We have additional control types, along with extended filters for input controls.
  • In the context of visualization, we have improved chart defaults.
  • Extended scenarios for data discovery along with visualization.
  • Charts like pie, donut, line, stacked and waterfall have been improved compared to older versions.
  • There are enhancements to data-highlight formatting across chart types.
  • We have reusable conditions across visualizations.
  • Compared to older versions, we now have drill support for level-based hierarchies.
  • Improved interaction on cross tabs.
  • Enhanced default maps, i.e. ESRI and NavTeq.
  • We can reuse the components on the canvas.
  • The new version offers SAP BW online access and analysis.
  • There is now a common server for Design Studio and Lumira, and Design Studio components have been merged into Lumira.
  • GUI based, with less coding required.
SAP Lumira 2.0 can be combined with the following:

DATA DISCOVERY AND APPLICATION -

SAP Lumira
SAP Business Objects Explorer
SAP Business Objects Analysis for OLAP (Online Analytical Processing)
SAP Business Objects Design Studio
SAP BEX Web Application Designer
SAP Business Objects Dashboards

OFFICE INTEGRATION -

SAP Business Objects Analysis Office
Live Office
EPM Add-In
SAP BEX Analyzer 

REPORTING - 

SAP Crystal Reports
SAP Business Objects Web Intelligence
SAP Business Objects Set Analysis
Desktop Intelligence

BUSINESS WAREHOUSE - LIVE ANALYSIS

BEx Query Elements -
  • Support for Display and Navigational attributes for analysis
  • Support for Hierarchical Custom Key Figure and Characteristic Structures
  • Selection of elements of a custom structure
BEx Query Hierarchies -
  • Support for Hierarchy with Linked nodes, measure structure (in Cross tab),
  • Respects Hierarchy level settings, Expand to Level setting on Query
  • Expand to Level, Expand and Collapse All on client
  • Compact Axis in Rows and Columns
  • Switch Hierarchy on client
BEx Query Display -
Display settings applied in Query while analyzing such as - 
  • Scaling Factor,
  • Reverse Sign,
  • Number of Decimals,
  • Hide/Show Element,
  • Sort on Characteristics,
  • Key & Text Display combinations for characteristics,
  • Show or Suppress Result rows,
  • Placement of result rows
BEx Query Variables -
  • Hierarchy Variables
  • Variables with Query result replacement
  • Variable Representation with Interval along with Range
  • Contains operator for Selection Option Variable
  • Support for Variables in Default Area of BEx Query
  • Resolves Hierarchy & Hierarchy Node variables dependencies
  • Support for Cascading variables for compound characteristic
  • Merged Variable Support
BEx Query Functionality - 
  • Support for Conditions applied on Rows / Columns in BEx queries
  • Support for Conditions applied on Independent Characteristics in BEx queries
  • Support for Local and Cell level calculations, respecting the context of calculation in Query
  • Zero suppression for columns and rows throughout the analysis with different object selections

HANA Live Analysis - 

Enhancements made for HANA live analysis -
  • Support for Parent Child and Level based hierarchies
  • Drill options on hierarchies such as Drill Up, Drill Down, Drill By, Drill Path(new).
  • Supports merged variables.
  • Operators for selection option variables.

IMPROVED UX - 

Enhanced home screen - 
  • All starting points (Sources, Connections, Local & Platform Documents) in one screen.
  • BW and UNX connections available by default post installation.
  • Retain session to BIP across workflows i.e. Login, Connection, Save and UNX access.
Application Enhancement - 
  • Prepare, visualize and put a story together in one screen, combining any of these actions in Lumira 2.0.
  • Contextual actions and right click enabled for all components and workflows.
Input Control Enhancements -
  • Drag and Drop filters to create controls.
  • Improved control look and feel, with visual enhancements to controls, text and padding.
Visualization improvement - 
  • Avoid re-work on chart formatting when updating a chart with additional details, thanks to the one-canvas approach.
  • Formatting capabilities under Design Tab, your work is retained irrespective of where you do formatting.
Story and Layout enhancements -
  • Tile-based approach and intelligent placing to reduce users' effort in alignment.
  • Insert visualization at particular location with right click on the canvas.

FILTERS IN LUMIRA 2.0 - 

  • Create Visualization Level filters on the canvas with contextual menu.
  • Consolidated view of applied filters on Filter bar at any level of application i.e. Story, Page and Visualization
  • Support for multiple conditions in a Filter with AND or OR conditions depending on operator combinations
  • Hierarchical display and navigation for List Of Values from BW hierarchies

INPUT CONTROL  ENHANCEMENTS - 

  • It now supports additional control types such as -
  1. Date picker for date-type objects.
  2. Slider for numeric dimensions.
  3. Text box to type in values.
  • Extended Scopes -
  1. Interact with Page filters via controls
  2. Interact with the Visualization filters via controls
  • Conditions -  Additional Operator support at Parity with Filters
  • Hierarchies - List of values display in control.

VISUALIZATION ENHANCEMENTS IN LUMIRA 2.0 - 

  • Subtle grid lines by default for better reading
  • Hide overlapping labels by default
  • Improved font styles for axis, legend and data labels
  • Better labels and no default legend for Pie charts
  • Improved font styles for Numeric Points
  • Convert a bar chart to Dual Axis chart
  • Convert a bar chart to Combined chart
Chart enhancements can be seen in the Waterfall, Stacked Column, Pie and Donut charts.
WATERFALL CHART -
  • Increasing and Decreasing color customization for waterfall chart
  • Option to connect the bars with line
  • Support for total and sub-total bars
  • More properties available for Plot Area, Axes, Title
  • Extended scenarios for data discovery with visualizations
STACKED COLUMN CHART -
  • Options to show labels as Value, Percentage or both
  • Option to show total values
PIE & DONUT CHART -
  • Options to show labels as value, percentage, category and category-percentage
  • Display of labels for Pie and Donut charts outside the Pie by default
LINE CHART -
  • Enhancements to the lines, i.e. smoother lines compared to older versions.
  • Customization for the axes, i.e. the x and y axis.
  • Option to hide overlapping data labels in charts
  • Improved set of color palettes, with the number of colors increased from 9 to 12.
  • Customization of chart borders for the chart area and plot area.
CONDITIONAL FORMATTING - 
  • Create conditions on measure or dimension values.
  • We can reuse the conditions across all visualizations.
CROSS TABS - 
  • Add / Replace column with Drag and Drop in cross tabs.
  • Drill options have been enhanced, e.g. drill up, drill down, drill by and drill path.
  • Assigning scaling factors to measures.
  • Display of the scaling factor and units on the header cells.
  • Swap axis between rows and columns
  • Overall results respecting aggregation of measure in single row
GEO MAPS - 
  • Additional Data point representation for NavTeQ
  • Multiple Layers with markers from different Data sets
  • Provide Titles for Geo Maps and Layers
  • Switch default Maps between ESRI and NavTeQ.
STORY AND LAYOUT - 
  • Precise resize and movements of components on canvas
  • Choose from multiple viewing modes i.e. Fit to Content, Width, height or Actual as per your preference
  • Reuse Story, Page, Visualization and Controls by duplicating them
  • Overlap any object on the canvas (the last added object will appear in front)
  • Opacity for background color and image can be changed
  • Along with filters, bookmarks now capture sorting, ranking and custom calculations on visualizations
  • Additional aspect Ratios for better rendering on Launchpad or define your own aspect ratio for target device.

COMPOSITES - 

  • Creation of your own components is now possible in the Lumira 2.0 version; with basic knowledge of JavaScript it can be achieved.
  • Re-usable across applications, stored on the BI platform
  • Can be used to define global components (header, footer, toolbar, global script pool)
  • Can also be used to decompose complex applications into smaller, better manageable parts

ENHANCED BOOKMARKS - 

  • Designer configures what is captured in a bookmark (e.g. selection of data sources, global script variables, components)
  • Increasing robustness of bookmarks against application UI changes
  • Promotion Management support of bookmarks and the folders
  • Multiple Bookmarks can be defined within one application
  • Bookmark configuration as a technical component in design time.
  • Personal and global bookmarks are supported

ENHANCEMENTS ON CHARTS-

  • Context menu at run time to change the title and legend.
  • Chart configuration redefined, with:
  1. A new user interface.
  2. New properties such as extended label capacity.
  3. Selection of chart types and setting of properties in one place.
  4. Contextual display of properties according to the selected elements.
  • Charts have been simplified compared to older versions in terms of:
  1. A reduction in the number of chart types without affecting their features.
  2. Ease of defining dual and combination charts.
  • Improvements to the chart feeding panel.
  • Chart property “Allow Data Source Modification” to enable add/delete dimensions/KPIs to chart
  • All configuration options moved to property sheet at design time

SAP Lumira:

To explain the concept of SAP Lumira, or rather to give an overview of Lumira before starting this blog, I want to give an illustration so that everyone who views this can easily understand the concept and, following it, develop an interest in working with Lumira.

Every one of us likes to eat food, as it gives us energy, and a few of us even have a hobby or passion for cooking, inviting someone over and offering them the food we have prepared.
So now I want to relate this food preparation to the SAP Lumira tool.

When we enter the kitchen in our house, our first intention is to "PREPARE" the food. While preparing food, we might think of preparing three or four items, or more or fewer.

Next, after preparing the food, our mindset tells us to taste it, and we even "VISUALIZE" the look of the prepared food. So we taste it, see what is missing from the preparation, and add those things so it becomes yummy and tasty.

After this we take a plate, put the prepared food on it and "COMPOSE" it, since after tasting the food our stomach feels hungry and we now want to eat. If a guest has arrived at our place, we try to compose the food on the plate so that it looks beautiful, and we hope the guest likes it just from the view of the composed food on the plate.

Finally, we leave the kitchen, come to the dining table and "SHARE" the food we have prepared, and we start to eat and enjoy it.

Similarly, in Lumira the concept is the same as with the food: first Prepare, then Visualize, after that Compose, and finally Share.

By now I expect the basic concept or overview of Lumira, and how it works, is clear in your mind.


LUMIRA is SAP's visualization tool. Like other visualization tools we use for creating reports, we use Lumira for better visualization. Lumira is a visual intelligence tool used to visualize data and create stories that provide graphical details of the data.

I would like to share my point of view on Lumira so that those working on this tool can benefit, both in their day-to-day work and in cracking interviews.
There are 4 basic key terms that we use and see in Lumira: detail, measure, hierarchies and custom calculations. As an example of a custom calculation, say we have a salary column in our data set and we would like to add one more column, such as bonus; we can apply a calculation to the salary column to obtain the values for bonus.
There are 4 tabs in Lumira and these are - PREPARE -VISUALIZE - COMPOSE - SHARE.
PREPARE – in this tab we fetch the data from the data source, and it is then sectioned into fact and dimension objects for the report. We can also add new custom calculations here.
In the Prepare tab we can apply different kinds of formatting to the acquired data set, such as data cleaning, creating new measures, creating formulas and adding a new dataset.
There are different panels under the Prepare tab, which are –
Dimension and measure – contains all the measures and dimensions in the data.
Dataset selector – we can switch between multiple datasets or acquire a new dataset.
Filter bar – represents the filters applied to any dimension in the dataset. To add a filter, click the icon in front of the dataset and click Filter.
We can add new calculated measures in the Prepare tab; the steps are as follows –
  • First go to the Prepare tab of Lumira. On the RHS, the 4th option from the top shows calculated measures, after the show/hide columns option.
  • Click on it and we will get 2 options, of which we can select one at a time, i.e. new calculated dimension and new calculated measure.
  • We select the option new calculated measure, enter the measure name, then enter the formula (and, if required, a function) and click OK.
  • The new calculated measure will then be added under the measures tab.
VISUALIZE – this tab is used to add graphs and charts on the data that has been imported and organized in the Prepare tab. We can add different dimensions and facts to the label axes.
The main areas under the Visualize tab are graphs, dimensions and measures. Here the X axis represents measures and the Y axis dimensions.
CHART CANVAS – this is used to create or modify a visualization. We can directly drag dimensions and measures onto the chart to build it. We can use various tools like sort by dimension, add or edit ranking by measures, clear chart, and fit chart to frame. Apart from that we can re-prompt, refresh, undo and redo.
To configure the visualization tools we can go to File → Preferences → Charts → chart canvas layout.
A new window opens, the Lumira Preferences window. On the LHS we have tabs such as –
General – under General we have the language option, auto recovery, the font for the report, and the default room.
View – under the View option we have the layout on the RHS.
Charts – this is the third option on the LHS; under it, on the RHS, we have options like chart canvas layout, which controls whether the chart builder sits on the left or right side. Then comes the chart style, from where we can select the color palette, template and zoom option, then click Done.
Below the Charts option on the LHS we have other options like datasets, network and SQL drivers.
In the Visualize tab we have the chart picker option, from where we can select our desired chart.
Chart shelves – from here we can add measures and dimensions to the visualization.
COMPOSE – in this tab we can create stories and presentations, including background color, titles, pictures and text.
SHARE – used to share the visualizations we have prepared and composed to different platforms.
Data sources – in the Lumira work I have done so far, I have used Excel, a universe and UDT as data sources.
For this steps are –
  • Open Lumira → go to the File option → click New (or directly press Ctrl+N).
  • A new window opens, "Add new data source", from which we can select our desired dataset.
  • Select the data source and click Next.
As in my previous project, I fetched the data from Excel as it was my data source, so for that:
  • I went to the File option → clicked New → selected my Excel file → and created the report in the Prepare tab.
  • After selecting the Excel file I sometimes went to the Advanced option to select a custom range (we can also include hidden rows and columns), and after that clicked the Create button, which took me to the Prepare tab.
  • In the Prepare tab we can see measures and dimensions separated into 2 sections, one for measures and another for dimensions. On the RHS we can view the data that was present in the Excel file.
  • Here we have options for filtering data, display formatting, converting to text, converting a dimension to a measure, creating a hierarchy, renaming tables, duplicating, merging columns, hiding from the dataset panel, and creating calculated dimensions.
  • After doing all this we move to the next tab, the Visualize tab, where we can see 3 columns. In the first column we can see the objects, classified into dimensions and measures. The 2nd column consists of the types of graphs we can make, along with the x and y axes, which define the dimension and measure objects to be added to the visualization in graphical form.
After we are done with the Visualize tab we move to the Compose tab, where first we have to provide a title for the story; below it we have 3 options, which are INFOGRAPHIC, BOARD, and REPORT.
INFOGRAPHIC – used to convey a narrative message in an improved way, including chart visual properties, shapes, images and text.
BOARD – useful for delivering aggregated information which include interactive charts, filtering and input control.
REPORT – different types of visualizations can be consumed which includes interactive charts, sections, input control.
From these 3 we can compose our report in Lumira, and after that we can share it.
In Share we have 4 options: export as file, publish to SAP Lumira Cloud, publish to Lumira Server and publish to SAP StreamWork.
Connection with a universe as the data source –
  • Open Lumira → go to File → New → then select the Universe option to connect and download the data set.
  • Then click Next.
  • Enter the universe credential details, which are host name, user name, password and the authentication type, and click Connect.
  • It then shows all the universes that have been created. Select the required universe and click Next.
  • A new window opens where we can add objects and filters to be applied on the data set. Click the Next button and it will take us to the Prepare tab of Lumira.
  • In the Prepare tab we can see the measures and dimensions defined at the universe level.
  • After that we can follow the same steps as for Excel and create the report using Lumira.

CREATION OF CHARTS IN LUMIRA –

Following are the steps to add a chart –
  • Under the Visualize tab go to the chart builder.
  • Select the chart type; by default it is a bar chart.
  • Choose a measure and drag it to an axis in the chart canvas. We can click the + option to add dimensions and measures if required.
  • We can then filter the data by adding filters to the chart, by clicking the filter option at the top LHS.
  • Select the dimension and click OK after applying the filter.
Types of chart in Lumira –
COMPARISON – compare the difference between values: Bar, Column, Radar, Area, Heat Map
PERCENTAGE – show the % of parts in a chart: Pie, Tree, Funnel, Donut
CORRELATION – show the relationship between different values: Scatter Plot, Bubble, Network, Numeric Point, Tree
TREND – show data patterns or possible patterns: Line, Waterfall, Box Plot, Parallel Coordinate
GEOGRAPHIC – represent the map of a country: Geo Map, Geo Pie, Geo Bubble, Geo Choropleth

CONDITIONAL FORMATTING –

It is used to highlight critical data points in a chart, based on values meeting certain conditions. Rules can be applied on both measures and dimensions.
In the above picture the yellow colored box represents where we can use conditional formatting.
Creation steps –
  • First of all we must have a measure value added to a chart.
  • Click the new conditional formatting icon and it will open the rule editor box. Enter the name of the rule.
  • Then select from the based-on list to add a measure or dimension. We can set multiple conditional formatting rules on a single measure or dimension.
  • Then select an operator and add one or more values for the condition.
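To make the rule idea concrete, here is a minimal Python sketch of how a conditional-formatting rule behaves: each value of a measure is tested against an operator and a threshold, and matching points are the ones that get highlighted. The `evaluate_rule` helper and the sample revenue data are illustrative assumptions, not Lumira's internal API.

```python
# Hypothetical sketch of a conditional-formatting rule on a measure.
OPERATORS = {
    ">": lambda a, b: a > b,
    "<": lambda a, b: a < b,
    "=": lambda a, b: a == b,
}

def evaluate_rule(values, op, threshold):
    """Return True for each value that triggers the formatting rule."""
    compare = OPERATORS[op]
    return [compare(v, threshold) for v in values]

revenue = [50, 120, 95, 200]
flags = evaluate_rule(revenue, ">", 100)  # highlight revenue above 100
# flags -> [False, True, False, True]
```

A second rule on the same measure (e.g. `"<", 60`) would simply produce a second list of flags, mirroring how multiple rules can coexist on one measure.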
Geography hierarchy –
  • After acquiring the data we have to check the dimension containing the location.
  • Click the option for the object that contains the location dimension, then click Geographic hierarchy → By names.
  • A new window opens with the geographic data; select the dimension to map to the hierarchy and click Confirm.
  • If a region doesn't apply to the data set, click None in the list.
  • After this we can see the set of analyzed values, solved or not found. For all the solved values it will create a hierarchy; click Done.

In this we can view 3 colors which are –

Locations mapped exactly – GREEN.
Locations with more than one possible match – YELLOW (e.g. more than one city like Pune in Maharashtra state).
Locations which are not found – RED.
  • To use this, we can select a geo bubble chart and then add the country to the geography and a measure to the chart.
  • Then select a value in the chart and we will get the drill option to the next level.
  • e.g. Country → Region → State.
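The green/yellow/red classification above can be sketched in a few lines of Python: the color depends on how many candidate locations match a name in the geographic database. The `GEO_DB` lookup table here is a made-up stand-in for that database.

```python
# Sketch of geographic-hierarchy match classification (assumed data).
GEO_DB = {
    "Mumbai": ["Mumbai, Maharashtra"],
    "Pune": ["Pune, Maharashtra", "Pune, Uttarakhand"],  # more than one match
}

def classify_location(name):
    """GREEN = exact match, YELLOW = ambiguous, RED = not found."""
    matches = GEO_DB.get(name, [])
    if len(matches) == 1:
        return "GREEN"
    if len(matches) > 1:
        return "YELLOW"
    return "RED"

statuses = {city: classify_location(city)
            for city in ["Mumbai", "Pune", "Atlantis"]}
# statuses -> {"Mumbai": "GREEN", "Pune": "YELLOW", "Atlantis": "RED"}
```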

CUSTOM HIERARCHY –

We can create custom hierarchies by combining dimensions.
Ex – suppose we have customer name, country, quantity, category and product columns in the dataset; then:
  • Select Category → Options → Create custom hierarchy.
  • A new window opens; enter the name of the hierarchy and select another dimension to add as the next level. Click Create.
  • After this, the product hierarchy will be added under the dimensions tab.
  • Add a bar chart, then add category and quantity. Once we click the category option we will get an option to drill down to the next level, i.e. PRODUCT.
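Conceptually, a Category → Product custom hierarchy groups the flat dataset rows so that drilling into a category reveals its products. The sketch below illustrates that structure with assumed sample rows; Lumira builds this for you in the UI.

```python
# Illustrative sketch of a Category -> Product hierarchy from flat rows.
from collections import defaultdict

rows = [
    {"category": "Furniture", "product": "Chair", "quantity": 10},
    {"category": "Furniture", "product": "Table", "quantity": 4},
    {"category": "Technology", "product": "Phone", "quantity": 7},
]

hierarchy = defaultdict(dict)
for r in rows:
    hierarchy[r["category"]][r["product"]] = r["quantity"]

# Top level of the chart shows category totals...
category_totals = {cat: sum(prods.values()) for cat, prods in hierarchy.items()}
# ...and drilling into "Furniture" reveals {"Chair": 10, "Table": 4}.
```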

MERGING –

We can merge 2 datasets using a join operator.
  • Go to the data pane at the top and click on it → then click Combine →
  • Once we click on Merge it will show us a new window with the compatible data types. Select the merge type and click Merge.
  • We can then select an inner join or left outer join and merge the objects.
Appending a dataset uses the union operator:
Data → Combine → Append.
  • A new window will open for appending data. To use it, both tables must contain the same number of columns with compatible data types; only compatible data types can be appended.
  • If the selected dimensions have compatible data types, the dimensions can be appended. If not, a message will appear: UNION CAN NOT HAPPEN.
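The two combine operations can be sketched in plain Python, assuming in-memory rows: an inner join (merge) matches rows on a key column, while append (union) only works when the two datasets have the same, compatible columns. The helper names and sample data are hypothetical, not Lumira APIs.

```python
# Sketch of merge (inner join) and append (union) on assumed row data.
def merge_inner(left, right, key):
    """Inner join: keep rows whose key appears in both datasets."""
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

def append(ds1, ds2):
    """Union/append: allowed only when the column sets are compatible."""
    if set(ds1[0]) != set(ds2[0]):
        raise ValueError("UNION CAN NOT HAPPEN: columns are not compatible")
    return ds1 + ds2

customers = [{"id": 1, "name": "Asha"}, {"id": 2, "name": "Ravi"}]
orders = [{"id": 1, "amount": 250}]

merged = merge_inner(customers, orders, "id")        # only id 1 matches
combined = append(customers, [{"id": 3, "name": "Meera"}])
```

A left outer join would instead keep all `customers` rows and fill the missing order fields with nulls; the structural check in `append` mirrors the compatibility message described above.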

I would like to add a few screenshots of Lumira. These are just examples to give an overview.

[Screenshots: report visualization in Lumira, Lumira heat map, Lumira canvas, 3D chart in Lumira]

 QUESTION AND ANSWERS ON LUMIRA -

1. Why do we use the Lumira tool?

It allows you to predict future outcomes and forecast as per changing market situations. You can create data visualizations and stories from multiple data sources. It helps you to adapt data to organizational needs to create stories with visualizations. We can share the visualizations on different platforms like SAP HANA, BO Explorer, Business Objects BI Platform, etc.

2. Why do we use custom calculations in a data set?

You can create custom calculations in Lumira data Visualization which are not available in data set or at database level.

Example

You have a "Salary" column in the data set; you can add a new calculated column named "Bonus" and apply a calculation on Salary to get the values for this column.
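The Salary → Bonus example can be sketched as a derived column, here in Python with an assumed 10% bonus rate and made-up employee rows; in Lumira the same thing is done with a calculated-column formula in the Prepare tab.

```python
# Minimal sketch of a calculated column: Bonus derived from Salary.
def add_bonus(rows, rate=0.10):
    """Return rows with a new 'Bonus' column calculated from 'Salary'."""
    return [{**row, "Bonus": row["Salary"] * rate} for row in rows]

employees = [{"Name": "Anil", "Salary": 50000},
             {"Name": "Neha", "Salary": 64000}]
with_bonus = add_bonus(employees)  # Anil's Bonus: 5000.0
```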

3. What is the use of the different tabs in Lumira?

Prepare

This is used to import data set in SAP Lumira. Data cleansing is done and converted into the appropriate measures or attributes for the reports.

You can add new custom calculations here.

Visualize

This tab is used to add graphs and charts on the data that has been imported and organized in Prepare tab. You can add different attributes and measures to Label axis.

Compose

This is used to create stories and presentation, including background colors, titles, pictures, and text.

Share

This tab is used to publish your visualizations to different platforms or with different set of users in BI Repository.

4. Can we add hidden columns in Lumira?

Yes. Use the Advanced option to select a custom range; you can also include hidden rows and columns.

5. When we acquire a dataset, under which tab does it come for performing data cleaning and editing?

Prepare tab.

6. How can we acquire data from an InfoProvider in a Lumira dataset?

Add new dataset → connect to SAP Business Warehouse

7. When you use the Query with SQL option to acquire a data set, you can see a few of the connections in red and green. What is the meaning?

To use SQL query to create dataset, go to file → New

Click on the Query with SQL option to download a data set and click Next. JDBC drivers have to be installed as the database middleware for using an SQL query. The driver is a .jar file you can download from the vendor's site and copy to the driver folder in the application path.

Select the SQL query; connections shown in green indicate that the drivers for that middleware are installed properly. Select the database middleware for the target database and click Next.

8. What are the different connection parameters that can be configured while using Query with SQL as a data source?

Connection Pool Mode: keeps the connection active.

Pool Timeout: the duration, in minutes, to keep the connection active.

Array Fetch Size: determines the number of rows to fetch from the target database at a time.

Array Bind Size: the larger the bind array, the more rows are fetched per batch.

Login Timeout: the time before a connection attempt times out.

JDBC Driver Properties

9. What are the different panels in the Prepare tab for data cleansing and applying filters?

Dimension and Measure Panel

It contains a list of all dimensions and measures acquired in the data set. The number in front of each object represents its data type.

You can use different tools in this panel to edit the data objects and to add hierarchies.

Dataset Selector

You can select between multiple datasets or you can also acquire a new dataset using this option.

Filter Bar

This represents filter applied to any dimension in dataset. To add a filter click on the icon in front of dataset and click on Filter.

10. What is the chart canvas?

This is used to create or modify a visualization. You can directly drag attributes and measures to chart canvas or can add to chart builder.

You can add various tools like

  • Sorted by Dimensions

  • Add or Edit a ranking by measures

  • Clear Chart

  • Fit chart to frame

  • Reprompt

  • Refresh

  • Settings

  • Maximize

  • Undo

  • Redo

11. What are the different chart canvas properties that you can set in Lumira?

  • Chart Canvas Layout

  • Chart Style

  • Template, font zoom etc.

12. Where do you define properties for Chart Canvas?

Go to File → Preferences → Charts

Here you can define various properties for chart canvas.

13. What is the use of compose tab?

You can create different stories in SAP Lumira in presentation style document using visualization, graphics and other customization that has been applied to dataset.

Once you go to the Compose tab you get multiple options to select an Infographic, Board or Report.

14. What are the different chart types in SAP Lumira?

  • Bar Chart

  • Column Chart

  • Radar Chart

  • Pie Chart

  • Donut Chart

  • Tree

  • Scatter Plot

  • Bubble Chart

  • Network Chart

  • And many more

15. Which chart types are best suited to show correlation between different values?

  • Scatter Plot

  • Bubble Chart

  • Network Chart

  • Numeric Point

  • Tree

16. What is the use of Geography charts in SAP Lumira? What are different chart types under this category?

It is used to present a map of a country or the globe in the analysis. Common chart types are −

  • Geo Bubble Chart

  • Geo Choropleth Chart

  • Geo Pie Chart

  • Geo Map

 17. What is Conditional formatting?

It is used to mention critical data points in a chart by different values meeting certain condition. Multiple conditional formatting rules can be applied on measures or dimensions.

18. Which chart types support conditional formatting in SAP Lumira?

Conditional formatting can be applied to:

  • Bar and Column charts (except 3D column charts)

  • Pie chart

  • Donut chart

  • Scatter chart

  • Bubble chart

  • Cross tab

19. What is the use of the display formatting option in the Prepare tab?

You can set the following formatting for an attribute or dimension:

  • Select a Value format

  • Choose a Display format

  • Prefix or suffix

20. Can we convert data types of a dataset in SAP Lumira?

Yes. You can convert one data type into another: in the Prepare tab, go to the column heading → Options.
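
Lumira does this conversion through the column's Options menu; the equivalent operation on a raw dataset can be sketched with pandas (the column names and values below are made up for illustration):

```python
import pandas as pd

# Columns acquired as strings, a common situation with flat-file sources.
df = pd.DataFrame({"order_id": ["1001", "1002"], "amount": ["19.99", "5.50"]})

df["order_id"] = df["order_id"].astype(int)   # string -> integer
df["amount"] = pd.to_numeric(df["amount"])    # string -> float
```

After the conversion the columns behave as numeric measures, e.g. they can be summed or averaged.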

21. What is the use of hierarchies?

Hierarchies are used to display data at different levels of granularity; you can drill up and down through the levels to better understand the relationships between objects.
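
As a rough sketch of what drilling through a hierarchy means, here is a hypothetical Category → Product hierarchy expressed with pandas aggregations (sample data invented for illustration):

```python
import pandas as pd

sales = pd.DataFrame({
    "category": ["Bikes", "Bikes", "Clothing", "Clothing"],
    "product":  ["Road", "Mountain", "Jersey", "Gloves"],
    "revenue":  [100, 150, 40, 10],
})

# Drilled up: the measure aggregated at the top (category) level.
by_category = sales.groupby("category")["revenue"].sum()

# Drilled down: the next (product) level under each category.
by_product = sales.groupby(["category", "product"])["revenue"].sum()
```

Drilling down in a chart simply switches the aggregation from the coarser level to the finer one, and drilling up reverses it.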

22. What is a geography hierarchy? How can we create a geography hierarchy in a dataset?

When data is acquired, the application looks for dimensions containing location data and marks them with an icon.

Click the option icon in front of a dimension → Create a Geographic hierarchy → By Names (this option is available only for string dimensions).

A new window named Geographical data will open → select the dimensions to map to the hierarchy and click Confirm.

23. What do green, yellow, and red mean while acquiring data for a hierarchy?

Locations that are mapped exactly are marked green.

Locations with more than one possible match (for example, if more than one city named London is found) are marked yellow.

Locations not found in the geographic database are marked red.

24. How do you use hierarchies in the charts added to the chart canvas in SAP Lumira?

When hierarchies are defined on the dataset, you can use the drill-up and drill-down options to move between levels.

25. How do you create custom hierarchies in SAP Lumira? How do you define the levels in a custom hierarchy?

Let us say you want to create a custom hierarchy Category → Product.

Select Category → Options → Create a Custom hierarchy.

A new window will open. Enter the name of the hierarchy, select the other dimensions to add as the next levels, and click Create. The arrows can be used to reorder the levels.

26. You have added multiple datasets in SAP Lumira. How do you merge the datasets?

You can merge two datasets using the Join operator.

Go to the Data pane at the top → Combine → Merge.

27. What are the conditions for performing a merge on multiple datasets?

To merge:

  • The datasets should have the same key column.

  • Only columns with the same data type can be merged.

  • All columns will be merged.

28. What is the use of the Merge type option in SAP Lumira?

Merge type defines the type of join. You can select from different join types: inner join, outer join, etc.
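
The join types behind the Merge type option behave the same way as joins elsewhere; a small pandas sketch with made-up data shows the difference between inner and outer:

```python
import pandas as pd

# Two hypothetical datasets sharing the key column "cust_id".
orders = pd.DataFrame({"cust_id": [1, 2, 3], "amount": [50, 75, 20]})
customers = pd.DataFrame({"cust_id": [1, 2, 4], "name": ["Ann", "Bob", "Eve"]})

# Inner join: only keys present in both datasets survive.
inner = orders.merge(customers, on="cust_id", how="inner")

# Outer join: all keys are kept, with missing values where there is no match.
outer = orders.merge(customers, on="cust_id", how="outer")
```

Choosing Inner in Lumira therefore drops unmatched rows, while Outer keeps every row from both datasets.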

29. What is the difference between merging and appending datasets in Lumira?

Merge is used to apply different joins on datasets.

Append uses the Union operator to stack one dataset on top of another.

30. How do you append datasets in Lumira?

To append datasets in Lumira, go to Data → Combine → Append.

To use Append, both tables should contain the same number of columns, and only compatible data types can be appended.
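
Appending is a union of rows; a minimal pandas sketch (with invented data) of the same operation, assuming both datasets satisfy the conditions above:

```python
import pandas as pd

# Two hypothetical datasets with identical columns and compatible types.
q1 = pd.DataFrame({"region": ["North", "South"], "revenue": [100, 80]})
q2 = pd.DataFrame({"region": ["East"], "revenue": [60]})

# Append (union): stack the rows of the second dataset under the first.
combined = pd.concat([q1, q2], ignore_index=True)
```

The result contains every row from both inputs with the shared column structure intact.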

31. You have acquired datasets with different numbers of columns and different data types. What happens when you perform an append on these datasets?

When you perform an append and the source and target dimensions differ, a message appears saying that the union cannot be performed.

Business Objects: Common Differences

Today I would like to share a few of the differences that are commonly asked about in interviews. Beyond interviews, knowing them also makes your concepts clearer when you are working in this domain.

Differences between SAP and BO

(SAP term → BO equivalent)

  1. Characteristic, Key Figure, Display Attribute → Dimension, Measure, Detail
  2. Characteristic Catalogue → Dimension Class
  3. Key Figure Catalogue → Key Figure Class
  4. Restrictions or Filters → Global Static Condition
  5. Variables → Dynamic Prompt
  6. Calculated Key Figure → Measures with Calculations
  7. Restricted Key Figure → Measure with Dimension Restriction
  8. Attribute-Level Hierarchies → Default Hierarchy
  9. Node-Level Hierarchies → Custom Hierarchy
  10. Exceptions → Alerts
  11. Conditions → Report-Level Restrictions
Differences between WebI and DeskI

  1. WebI (InfoView) is a thin (half) client; DeskI is a thick (full) client.
  2. InfoView supports only universes that are exported to the server (no offline mode); DeskI supports universes stored locally as well as on the server.
  3. WebI supports personal data files; DeskI also supports personal data files.
  4. The file extension for WebI is .wid; for DeskI it is .rep.
  5. A report created in WebI cannot be saved to the local system; a report created in DeskI can.
  6. Columns cannot be hidden in a report created in WebI; in DeskI they can.
  7. WebI partially supports Scope of Analysis; DeskI fully supports it.
  8. Slice and Dice is not possible in WebI; it is possible in DeskI.
  9. In WebI, creation, modification, distribution, and scheduling are all possible through the web; in DeskI, creation and modification happen in the client tool, while distribution and scheduling are done from InfoView.
  10. Drill across is not possible in WebI; it is possible in DeskI.

Difference between WebI and Crystal Reports

  1. WebI needs no software installation on the developer's system; the client version of Crystal Reports must be installed.
  2. WebI needs no integration kit to publish reports to the BO server; for Crystal Reports an integration kit is mandatory.
  3. WebI supports only universes and personal data files; Crystal Reports supports all data sources.
  4. WebI supports ad hoc features; Crystal Reports does not.
  5. WebI supports dynamic aggregation; Crystal Reports does not.
  6. Dynamic drilling is possible in WebI; it is not possible in Crystal Reports.
  7. In WebI, creation, modification, scheduling, and distribution are possible from the web portal (InfoView); in Crystal Reports, distribution and scheduling are possible from the portal, but creation and modification require the Crystal Reports client.
  8. WebI reports can be exported in PDF and Excel formats; Crystal Reports can export to Excel, PDF, Word, CSV, text, XML, etc.
  9. WebI supports up to 90,000 records per report; Crystal Reports has no specific limit on the number of records.
Differences between WebI and Xcelsius

  1. WebI supports only universes and personal data files; Xcelsius supports universes, WebI, Crystal, XML, and BEx sources.
  2. WebI developers need not install any software; Xcelsius requires the client version installed on the local system.
  3. WebI does not support interaction features; Xcelsius does.
  4. WebI supports only a few chart types; Xcelsius has many charts and components.
  5. WebI supports up to 90,000 detailed records; Xcelsius supports only 512 summarized records.
  6. WebI supports ad hoc features and allows modification through the web; Xcelsius does not support any ad hoc features.
  7. In WebI, creation, modification, scheduling, and distribution are possible from the web portal (InfoView); in Xcelsius, creation and modification happen in the client, and distribution is done from InfoView.
