SAP HANA: An Introduction




No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP SE or an SAP affiliate. This guide is organized as follows: an introduction and overview, followed by the SAP HANA architecture, which describes the basic capabilities and architecture.

But what does a View mean to a non-technical person? In the language of relational databases, a View is a virtual table, that is, a stored query whose result is assembled on demand rather than persisted. Generally speaking, sorting is not fixed in calculation views. When more than one fact table is needed in an information view, a calculation view comes into the picture. We can add multiple SAP HANA instances in SAP HANA studio. I hope that, after you read this paper, you will have a better understanding of what modeled data can do for you, with the aid of the three view types in SAP HANA.

SQL is the query language you use to interrogate databases.
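
For example, a simple query might look like the following sketch; the table and column names here are purely illustrative, not from the original text:

    -- Purely illustrative: the ten largest orders for one customer.
    SELECT order_id, order_date, amount
      FROM sales_orders
     WHERE customer_id = '1000'
     ORDER BY amount DESC
     LIMIT 10;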

Please read our previous tutorial on how to create an Attribute View. You will now create a Calculation View.

Please make sure you have gone through the tutorials on Attribute Views and Analytic Views, as they are important for your understanding of Calculation Views. Your calculation view should have all the fields required and all the business logic needed.

Hi everyone, and welcome to the most important tutorial in this series, where we learn how to create a graphical calculation view. Calculation views are used to consume other analytic, attribute, and calculation views, as well as base column tables.

A calculation view can perform complex calculations, joining different tables, standard views, and even other calculation views as sources.
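
As a sketch of how such a view is consumed (the package and view names are invented for illustration): in the repository-based era this course describes, activated graphical views are exposed as column views in the _SYS_BIC schema and can be queried like any table:

    -- Hypothetical activated calculation view CV_SALES in package "demo".
    SELECT region, SUM(net_amount) AS total_sales
      FROM "_SYS_BIC"."demo/CV_SALES"
     GROUP BY region;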

Understanding how each of these different view types adds value on top of the raw data in the database will help you figure out how to model your data for maximum flexibility and performance. Can we use any of the above view types to create our apps? Unlike Viz charts, VizFrame charts are a bit different. Follow the rules of the corresponding control to format the data appropriately.


When we develop a Fiori application (other SAPUI5 development is similar), we meet the following challenges: we need to write the view in XML format, which without good tooling is tedious and error-prone (for example, mismatched tags), and SAPUI5 has so many properties that it is hard to remember all of them.


With UI5 and Fiori knowledge you will be able to develop cloud applications for desktop and mobile devices. In this tutorial, we will learn how to design VizFrame charts by setting the properties of the controls. The OData protocol exposes a uniform service interface to operate on collections of structured and unstructured data.

Use the date picker control if the user needs to enter a single date or a date range. It consists of two parts: the date input field and the date picker.



Conclusion: OpenUI5 lets you build enterprise-ready web applications that are responsive across devices and run on almost any browser of your choice. The descriptor data is stored in JSON format in the manifest. The ultimate fate of an OData service is to be consumed by front-end applications.

If the model is available, the binding info is used to create a binding, and it is also stored as a property of the filter; if the model is not yet available, the managed object's modelContextChange event is used to try again later.

The filters are applied to the ListBinding, and there is no official API to access the current filter objects. When we create a custom control, we are supposed to extend the Control base class.

There is one special case where the contextual filter can actually be cancelled by the user: when the contextual filter is used to prefilter the items listed in a select dialog. The SAP Web IDE offers unparalleled flexibility to build and run innovative solutions, with a wide range of capabilities and features covering the entire spectrum of performance improvement and enhancement processes.

In this reading sample, we'll discuss some of the most widely used SAPUI5 application patterns and their attributes. Such information is most often shown as tooltip text when the mouse moves over the element.

Note: the intent of the following tutorials is not to focus on SAPUI5, but to use it as a means to execute the SAP Predictive services. If the assertion is sent to the SP via the POST binding (frequently used with third-party SP servers), the assertion itself will be transmitted via the browser; if HTTP header plugins are available for the browser, you can view the assertion details there.

All of this creates unprecedented opportunities for all organizations to grow their businesses by exploiting the connectivity of consumers and business partners, tapping into the depth and variety of new types of data, acquiring this data in real time for real-time decision making, and developing innovative new applications quickly.

Consumerization is driving expectations of what business IT should offer its users. As users become familiar with smart consumer applications, they also demand real-time applications and new, innovative applications that enable deep insight and provide proactive decision support in their jobs. We cannot just keep adding more complexity to existing IT landscapes in the hope that we can keep pace with trends.

What is needed is a fresh start: time to start with a blank canvas and rebuild business systems from the bottom up, using only the latest technologies aligned to the modern digital world.

The Problem with Current Landscapes

Typical IT landscapes have developed over time into complex arrangements of purchased, acquired, and custom-developed applications, powered by multiple platforms.

These platforms can be based on incompatible hardware from different vendors, with different operating systems, different databases, and even different development languages. To try to pull these different applications together, we added extra integration applications, and the IT department has been responsible for the integration of these systems. Moving, harmonizing, and cleaning data results in multiple copies of that data. We have placed huge demands on system resources during batch processing and expect users to wait for long-running processes such as financial close, consolidations, and Materials Requirement Planning (MRP).

Complex landscapes create fragmented business views of data. To obtain a holistic view, users are required to wait until consolidation is complete. Developing new applications in a complex landscape is also difficult: it takes time and is expensive to build and maintain.

There is too much IT complexity in most organizations; complex landscapes are costly to maintain, and multiple skills are needed.


Complexity stifles growth and suppresses agility and innovation, which are critical for survival in today's digital world.

One Platform for all Applications

The answer is to have all applications powered by one high-performance platform.

This means a common architecture with only one store for all data, regardless of type. Data is available to all applications in real time: no more data movement and no more management of multiple data stores. Only one copy of the data is needed for any type of access. Traditionally, systems were optimized either for transactions or for analysis.

Analysis systems took a different design approach: the hardware, database, and data models were built around batch loading, aggregated storage, and a focus on read-intensive queries. With a single platform, no movement of data is necessary, and we always work from the same single copy of the data for any requirement, whether transactional or analytical.

Advances in Technology

How can one platform handle all applications, and why did we not do this earlier? SAP HANA takes full advantage of recent trends in hardware evolution to ensure it is able to handle such an ambitious challenge. Let's start with memory. Historically, the high cost of memory meant that only small amounts were available to use. This caused a serious bottleneck in the flow of data from the disk all the way to the CPU: it did not matter how fast the processor was if the data could not reach it quickly.

We now have access to huge amounts of cheap memory. With so much memory available, we can store the entire database of even large organizations completely in memory, so we have instant access to all data and we eliminate wait times. We can lose the mechanical spinning disk, and the latency it brings, and rely on memory to provide all data instantly. Memory is no longer the bottleneck it once was.

To address large amounts of memory, we need 64-bit operating systems. Let's now consider the CPU. In addition to huge memory, processors continue to improve at a phenomenal rate.

We now have high-speed multi-core processors that can take on complex tasks and process them in parallel. This means that even the most complex analytical tasks, such as predictive analysis, can be carried out in real time. With multiple CPUs, each with multiple cores, we have access to huge processing power to consume and process huge volumes of data in minimal time.

Advances in the design of on-board cache mean that data can pass between memory and CPU cores rapidly. In the past, even with large amounts of memory, this was still a bottleneck: the hungry CPUs were demanding more data, and the journey from memory to CPU was not optimal.

And with modern blade server architecture, we can now easily slot more RAM and more CPUs into our landscape to add more processing power or memory, in order to scale up to any size.

Introduction to SAP HANA

SAP could have just kept the same business application software that was written 20 years ago, along with the traditional databases that supported it, and installed all this on the new hardware.

There would be some gains, but traditional databases and applications were designed around old, restricted hardware architectures. This means they would not be able to fully exploit the power of the new hardware with all the developments mentioned earlier. Put simply, the business software needed to catch up with advances in hardware technology, and so a complete rewrite of the platform was required.


The platform is the software side of the equation, and it was built entirely by SAP. Because application services can run inside SAP HANA itself, many applications are built in a two-tier model rather than a three-tier model. For example, imagine an application that allows a project manager to quickly check that all team members have completed their time sheets. This could easily be developed as a web application where only a web browser and SAP HANA are required; no application server is needed.

Everything the developer needs at design time is there, and what is needed at run time is also there. SAP HANA natively handles many data types, including text, spatial, graph, and more.

However, it is not enough to simply store these new data types; we need to be able to build applications that can process and integrate this data with traditional data types, such as business transactions.
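
As a rough sketch of what this integration can look like (all table, column, and index names here are assumptions, not from the original text), SAP HANA's SQL can combine a fuzzy full-text search with ordinary joins against transactional data:

    -- One-time setup: a full-text index enables fuzzy search on the column.
    CREATE FULLTEXT INDEX complaints_text_idx ON complaints (complaint_text);

    -- Find invoices whose complaint text loosely matches the reported problem.
    SELECT c.complaint_id, i.invoice_no, i.batch_no
      FROM complaints AS c
      JOIN invoices   AS i ON i.invoice_no = c.invoice_no
     WHERE CONTAINS(c.complaint_text, 'lumpy paint', FUZZY(0.8));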

SAP HANA stores data optimally using automatic compression, and it can manage data on different storage tiers to support data aging strategies. It has built-in high availability functions that keep the database running and ensure mission-critical applications are never down.

Further data footprint reductions are achieved because we removed unnecessary tables and indexes. We also reduce the in-memory data footprint by implementing data aging strategies.

The benefit of this is that data that is used less frequently can be moved automatically from the hot store to the warm store, so we are not filling memory with data that is less useful. However, this data is still available whenever it is needed. Could we simply keep everything in memory? Technically we could, but it would not be efficient. Most business applications refer to only a small subset of data for their day-to-day running, and that is typically the most recently created data. We also use temperatures as an easy way to describe where data fits on the scale of usefulness.

Active or hot data is data that is very recent, or perhaps data that, although old, is the focus of a current analysis and is being processed. Passive data, usually called warm data, is useful but less used. Cold data is rarely, if ever, used. In traditional systems, data was either hot (in the database) or cold (archived outside the database). There were usually never multiple temperatures of data, due to the limitations of the technology at that time.

Big Data is a term often used to refer to the staggering amounts of data being collected, especially by machines, sensors, social media, and so on. In recent years, solutions have been developed for the storage of this type of data. One of the most popular solutions is called Hadoop. Hadoop is not a relational database; its key role is to provide data storage and access to systems that require the data.

Hadoop and other Big Data solutions should be considered in the overall planning for data management.

Push Down Processing to SAP HANA

In the past, the key job of the database layer was to listen for requests for data from the application server and then send that data to the application server for processing. Once the data had been processed, the results would be sent back down to the database layer for storage.

With SAP HANA, the processing itself is pushed down to the database layer and done quickly in memory. In the past, detailed data was summarized into higher-level layers of aggregates to help system performance. On top of aggregates, we built more aggregates and special versions of the database tables to support specific applications.

As well as storing the extra copies of data, we also had to build application code to maintain the extra tables and keep them up to date. A backup of these extra tables was also required, so even IT operations were impacted. In addition to aggregates, there is another inefficiency to remove: database indexes. Indexes improve access speed because they are based on common access paths to data, but they need to be constantly dropped and rebuilt each time the tables are updated.

So again, more code is needed to manage this process. The traditional data model is complex, and this causes the application code to be complex. With a complex data model and complex code, integration with other applications and enhancements are difficult, and simply not agile enough for today's fast-moving environment.

With SAP HANA, we do not need pre-built aggregates; results are aggregated on the fly from the detailed data. SAP HANA organizes data using column stores, which means that indexes are usually not needed; they can still be created, but they offer little improvement. As well as removing the aggregates and indexes from the database, we can also remove huge amounts of application code that deals with aggregates and indexes.
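
A minimal sketch of this idea, with invented table and column names: the totals that would once have lived in a maintained aggregate table are simply computed on demand from the line items:

    -- Totals are computed on the fly; no aggregate table has to be
    -- maintained, backed up, or kept consistent with the detail data.
    SELECT customer_id,
           YEAR(order_date) AS order_year,
           SUM(amount)      AS total_amount
      FROM sales_orders
     GROUP BY customer_id, YEAR(order_date);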

We are left with a simplified core data model and simplified application code. Now it is much easier to enhance the applications and integrate additional functions.

Choice of Configurations

For on-premise deployments, SAP HANA is delivered as a brand new, all-in-the-box application where all software and hardware are provided and fully configured by certified partners.

There are many different configuration options available to suit organizations of all sizes. Many customers already have hardware components and software licenses that they would like to re-purpose, and this flexible approach ensures implementation costs are kept to a minimum.

This restriction does not apply to non-production installations, for example, development and sandbox systems. Several versions of Linux are supported.

Figure: Flexible Deployment Options. Cloud: run all applications in the cloud. On premise: run all applications on premise. Hybrid: leverage the deployment option that meets business priorities.

On-premise means the entire solution (the software, network, and hardware) is installed and managed by the customer.

A cloud deployment is managed by SAP and hosting partners, which means customers do not have to be concerned with managing the infrastructure; they can simply get on with using and developing applications with SAP HANA. Another possibility is a hybrid approach, where a combination of on-premise and cloud is used. SAP HANA is capable of handling any type of application: analytical, transactional, consumer-facing, back office, real-time, predictive, cloud, and more.

SAP HANA is Central to SAP's Strategy

With a single, scalable platform powering all applications, customers have an opportunity to simplify their landscapes and to develop new, innovative applications that cover all data sources and data types. The real value in the virtual data models is the business semantics added by SAP.

Raw database tables are combined, and filters and calculations are added, to expose business views ready for immediate consumption with no additional modeling needed. So instead of referring to multiple raw tables in your reporting tool, creating joins and unions manually, and applying filters to add meaning to the data, you simply call a view from the virtual data model and the data is exposed. While these are different technical approaches, they both deliver the same outcome: a virtual data model that exposes live operational data for analytics.

This could be achieved in a variety of ways using standard SAP data replication tools.

Connect IoT with Core Business Processes

Traditional business systems are simply not ready to support the massive growth in device connectivity that is proposed by the Internet of Things (IoT).

Imagine having access to detailed machine data a few clicks away from a business transaction. Let's consider this scenario: A customer is disputing an item on their invoice and complains that the paint we supplied is too lumpy.

So we drill down from the invoice, discover the actual line that relates to the paint problem, drill down to the batch that we supplied, and then drill down to the shop floor data to check that the recipe for the paint was correct.

But wait: when we drill down to examine the data generated from the paint mixing machine, we see that it did report overheating problems between 2… We now need to talk to the engineers on the shop floor to find out why this was not detected, and get back to the customer with a fast solution.

Sport analytics: provide fans with real-time in-game statistics in order to fully engage them. The NBA is already up and running with this, and many other sports bodies and teams have similar platforms. SAP manages the entire solution; customers just provide the business users. There are also many ready-built applications from SAP and partners that are powered by SAP HANA, are available in the cloud, and can be used standalone or integrated with existing applications.

You can develop Java applications just as you would for any application server, and you can easily run your existing Java applications on the platform. The SAP HANA Enterprise Cloud (HEC) is not public; it is for dedicated customers and their applications. You can consider HEC an extension of a corporate network.


So, customers pay for what they need and do not have to worry about procuring expensive hardware, software and skills to run their SAP HANA powered applications. Just bring your business users and any devices.

SAP HANA uses a row and column store database, and the physical storage can be in-memory, on disk, or a combination of both.

There are a large number of engines available. The Application Function Library (AFL) is a repository of ready-made common business functions and predictive algorithms that developers can use in their applications.

Enterprise Information Management (EIM) is optional and is only installed if required. The recent addition of EIM means that customers no longer need to install and use additional components for loading; customers simplify their landscapes by using the built-in EIM capabilities. Smart data access (SDA) enables the management of data at different temperatures.

SAP NetWeaver is still required to provide the business layer, the flow logic, and the connectivity and orchestration with other applications. Of course, data has to be acquired, and you may use the built-in EIM components or external data provisioning tools, as mentioned earlier, in addition to remote sources.

This component is optionally used to support light, web-based applications where a full application server and all its capabilities would be overkill. XS provides all the application services you need to access the required data from within SAP HANA's database, call the data processing engines, and run the application logic.

XS has a built-in web server, so applications are easily web-enabled. JavaScript is the application language used with XS. SAP HANA comes with all the development and testing tools required to build, deploy, and manage complete applications.

Evolution of the XS Engine

The new version is called XS Advanced. It provides even more application services, employs open standards, and is capable of supporting larger and more complex applications written in many more languages. Classic XS is tied to the database server, so it was not possible to scale the XS component separately. With XS Advanced it is possible to scale only that component, so more power can be given to the application processor while the database remains unaffected.

All new development objects are now created in the new XS Advanced architecture. XS Classic does not use Cloud Foundry, so customers with XS Classic cannot develop a single application for use both in the cloud and on-premise. XS Advanced uses the Cloud Foundry architecture, so applications can be written once and deployed either on-premise or in the cloud with no redevelopment.

XS Advanced is built on a microservices architecture. This means applications are divided into small chunks, allowing the developer to choose the development language for each part; it also makes it possible to configure each part of the application to consume more or fewer resources as needed. This is known as elastic computing. SAP HANA studio is, for many people, the only interface they need. It is installed locally, is based on Eclipse, and is developed in Java.

See the separate lesson later for details. A connection is identified by the host and instance: this pair of details identifies the exact target system. You can optionally give each connection a description, so it is easy to identify the purpose of each system when the list of connections becomes long. It is possible to export the list of connections to a file so they can be imported by others, who then do not have to manually define the connections.

Of course, the user credentials are not saved. You can also share the exported list of connections as a central store. Each user creates a link to this central store and does not need to create their own connections or import connections.

This means all connection information is managed centrally, so any changes are made in just one place. Perspectives are predefined user interface (UI) layouts that contain several views.

A view is a pane of varying size within a perspective that provides specific information, such as a Where Used list. Each view can be moved around via drag and drop. You can also customize a perspective by adding or removing views. Views can appear in multiple perspectives; for example, the Systems view is used in most perspectives, as it presents a hierarchical list of objects in each SAP HANA system that is useful to everyone.

It is possible to have several perspectives open at the same time, and to switch from one perspective to another. To do so, in the perspective switcher in the upper-right corner of the screen, choose the perspective you want to open.

For this reason, you will see a lot of views in the S h o w V iew dialog box. This includes the S y s te m s view. To customize a view, choose the V iew M e n u button, and choose C u s to m iz e View Resetting a Perspective Any perspective can be reset to its default layout in order to restore the default views in their original positions and sizes. For example: This user must be active. The landscape XML file does not contain a password. You will have to specify the user and password for any system added to the S y s te m s view.

The Systems View

The Systems view lists all the systems that have been registered manually or by a landscape import. For each system, the content is organized into folders of database objects. All these objects are organized into schemas. Schemas are used to categorize database content according to customer-defined groupings that have a particular meaning for users. Schemas also help to define access rights to the database objects.

From a modeling standpoint, schemas can help to identify which tables to use when defining information models. But a model can incorporate tables from multiple schemas. Schemas do not limit your modeling capabilities.
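
A small sketch of working with schemas; all object names here are illustrative, and modeling_user is a hypothetical database user:

    -- Group related tables under one schema.
    CREATE SCHEMA sales_data;

    CREATE COLUMN TABLE sales_data.orders (
        order_id   INTEGER PRIMARY KEY,
        order_date DATE,
        amount     DECIMAL(15,2)
    );

    -- Schemas also carry access rights.
    GRANT SELECT ON SCHEMA sales_data TO modeling_user;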

All the information models created in the modeler will result in database views. By default, all the systems listed in the Systems view appear in the System Monitor view.

You get the most important information about system status and alerts, as well as disk space, memory, and CPU usage. You can customize this view by adding or removing columns: right-click in the System Monitor view and choose Configure Table. If you want to filter the list of systems shown in the view, right-click in the System Monitor view and choose System Filter.

Modeler

The Quick View is a practical entry point dedicated to the Modeler perspective. From this view, you can create, manage, and transport information models (packages and views), define or execute data provisioning, define schema mapping, and so on. You can define your favorite actions (for example, Export, Import, and Validate) and display only a custom list of these favorites.

You actually select both a system and a user logged on to that system. If you are logged on to the same SAP HANA system with two or more different users, the action will be authorized based on the privileges of the user you have selected.

If you have closed the Quick View, you can reopen it. Note: the Quick View only displays within the Modeler perspective. The information views, along with other modeling objects such as analytic privileges and procedures, are organized in packages. Each package is a repository package that you can assign to a delivery unit in order to transport the objects it contains.

What would you find in a package? Development content used by application developers: for example, this is where you would create the JavaScript and HTML that will be used in your applications. There are plenty of tools to support the developer, including trace, debug, code prompts, check-in, and check-out. You will perform the following tasks. If you are prompted to choose a folder to store settings, use the default location and choose Submit.

If you are prompted to choose a workspace folder, leave the defaults unchanged and choose OK. If you are asked to create a password hint (in case you forget your password), choose No.

Enter the connection details and your credentials as given in the following table (for example, Host name: wdflbmt…). Explore the Systems view by expanding the nodes. In the Catalog node, the system has automatically created a schema for you.

The schema name is the same as your user name and is the default schema whenever you work with database objects such as tables. If you want to work with tables and other database objects, choose Catalog.

Most traditional enterprise relational databases are row based because this is regarded as the optimal design for a transactional system.

Both table storage types are needed in a system that handles both transactional and analytical applications in the same database.

Column and Row Store

The figure, Column and Row Store, shows that the key difference between row and column store is the way the same data is organized.

Column store tables are efficient for analytical applications where requests for sets of data are not predictable. Usually only limited columns are required. With column store, only the required columns are loaded to memory so we avoid using up memory with columns that will never be used.

Also, the data is arranged efficiently, with all values of a column appearing one after another. This continuous sequencing of the column values is preferred by the CPU, which is able to scan the values efficiently without having to skip over values. A few more positive aspects of column store: indexes are usually not required, which helps to reduce complexity by avoiding the need to constantly create, drop, and rebuild indexes.

It is easy to alter column store tables without dropping and reloading data. Column store tables are optimal for parallel processing, with each core able to work on a different column. The downside of column store is the cost of reconstructing complete records from the individual column stores when all columns are required by the application. This is the case when the application is transaction-based: all fields are usually needed for a record update and must all be retrieved.

This would be possible with column store, but it would be slower than if the storage were row based, where all the columns of a record are held together and can be read quickly. Row storage is still needed to support transaction processing where all columns need to be retrieved.

Often an application is both transactional and analytical.

In this case, you must decide which storage method is best. You cannot have a table that is both row and column storage; however, it is easy to convert a table from row to column and vice versa, and you do not lose data when doing this.
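
A minimal sketch with invented tables: the storage type is chosen at creation time, and an ALTER TYPE clause converts a table in place (exact syntax can vary between revisions):

    CREATE COLUMN TABLE sales_items (
        item_id  INTEGER PRIMARY KEY,
        product  NVARCHAR(40),
        quantity INTEGER
    );

    CREATE ROW TABLE app_settings (
        setting_key   NVARCHAR(32) PRIMARY KEY,
        setting_value NVARCHAR(128)
    );

    -- Convert the storage type in place; the data is preserved.
    ALTER TABLE app_settings ALTER TYPE COLUMN;
    ALTER TABLE app_settings ALTER TYPE ROW;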

Compression is most impressive when there is a lot of repetition in the data values: for example, a huge sales order table where the customer type (A, B, or C) is stored on each customer order. In this case, the customer type would appear a huge number of times in the column. Compression strips out the repetition and stores each unique value only once in a dictionary store. SAP HANA then uses integers to represent the business values in the original store, as this takes up far less space and is also very efficient for scanning.

SAP HANA links the dictionary entries to the actual table using special reference stores that identify the position of the original value and its corresponding business value in the dictionary store. The processing happens invisibly. With the new hardware architecture, especially the new multi-core processors, we can ensure instant responses by spreading the processing task across the cores.
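
One hedged way to see this compression at work is the monitoring view M_CS_COLUMNS; the schema and table names below are illustrative:

    -- Compression details per column for one table.
    SELECT column_name,
           compression_type,
           uncompressed_size,
           memory_size_in_total
      FROM m_cs_columns
     WHERE schema_name = 'SALES_DATA'
       AND table_name  = 'ORDERS';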

Parallel Processing

SAP HANA automatically spreads the workload across all cores and ensures all parts of the hardware are contributing to the throughput.

SAP HANA is scalable, which means you can easily add more processors as required in order to increase the parallelization and therefore the speed of processing. Column store tables are automatically processed in parallel.

Each column can be processed by one core.


For column store tables, you can define partitions on each column. This means that only the required partitions are read into memory. For example, if a query requests only current-year data, all other years in the column are ignored. Partitions can be created based on known, popular business values, or by simply allowing SAP HANA to split up large columns in an arbitrary way.
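
A sketch of both options with an invented table: explicit ranges for known business values, or hash partitioning to let SAP HANA split the data arbitrarily:

    CREATE COLUMN TABLE sales_history (
        order_id   INTEGER,
        order_year INTEGER,
        amount     DECIMAL(15,2)
    )
    PARTITION BY RANGE (order_year)
        (PARTITION 2014 <= VALUES < 2015,
         PARTITION 2015 <= VALUES < 2016,
         PARTITION OTHERS);

    -- Alternatively, let SAP HANA split the data arbitrarily:
    -- CREATE COLUMN TABLE ... PARTITION BY HASH (order_id) PARTITIONS 4;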

Since SPS10, the limit has increased dramatically to 16,000 partitions per column table.

Figure: Data Temperatures.

The persistence layer is not a separate component. There are two reasons we need the disk layer: first, to provide an area to unload less important data when memory is full (we call this inactive data); second, to enable data recovery if the power fails. We will cover the second reason later, when we discuss high availability.

For now, let's focus on the first reason. You could, in principle, buy enough memory to hold all data; however, most organizations will size their SAP HANA system with only enough memory to hold the core data and will utilize disk to store the remaining data.

This means that there will be competition for memory. When memory is full, the data that is used less often is automatically moved to disk to make way for new data; the larger the memory, the less displacement is needed. Remember also that some space is needed in memory as a working space for calculations. An organization usually values its recent data more highly than older data, and often finds itself accessing the recent data more frequently than the older data.

Conceptually, data can be classified into temperatures. Data that is accessed frequently is called hot data. Data that is accessed less frequently is called warm data. Data that is rarely accessed (often retained only for legal purposes) is called cold data. For now, we will focus on hot and warm data.

Quite simply, any data that is accessed by any application always comes from memory. This means that if a table is sitting in the persistence layer, the moment it is needed, it is automatically loaded into memory.
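
A sketch of how this loading can also be controlled explicitly; the object names are illustrative:

    -- Remove a table from memory; it remains safely in the persistence layer.
    UNLOAD sales_data.orders;

    -- Bring all of its columns back into memory.
    LOAD sales_data.orders ALL;

    -- Check whether the table is currently loaded (TRUE, FALSE, or PARTIALLY).
    SELECT table_name, loaded
      FROM m_cs_tables
     WHERE table_name = 'ORDERS';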

Column tables can be partitioned, and SAP HANA is smart enough to load only the required columns and partitions into memory, leaving the unwanted columns and partitions in the persistence layer.

Delta Merge

Updating and inserting data into a compressed and sorted column store table is a costly activity, because each column has to be uncompressed, the new records inserted, and the column recompressed, so the whole table is reorganized each time.

For this reason, SAP has separated these tables into a main store (read-optimized, sorted columns) and a delta store (write-optimized, non-sorted columns or rows). A regular, automated database activity merges the delta store into the main store. This activity is called delta merge.

Queries always run against both main and delta storage simultaneously. The main storage is the largest, but because its data is compressed and sorted, it is also the fastest.

Delta storage is very fast for inserts, but much slower for read queries, and it is therefore kept relatively small by running the delta merge frequently. The delta merge can be triggered based on conditions that you can set.
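
A hedged sketch (table names are illustrative): the merge can also be requested manually, and the monitoring view M_CS_TABLES shows how large the delta store currently is:

    -- Request a delta merge for one table (normally this runs automatically).
    MERGE DELTA OF sales_data.orders;

    -- Inspect main and delta store sizes for that table.
    SELECT table_name,
           memory_size_in_main,
           memory_size_in_delta,
           record_count
      FROM m_cs_tables
     WHERE schema_name = 'SALES_DATA'
       AND table_name  = 'ORDERS';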

When a merge condition, such as a threshold on the size of the delta store, evaluates to true, the delta merge is triggered. A delta merge can also be triggered by an application. Staying on top of the delta merge is critical to maintaining good performance of SAP HANA, and the administrator is responsible for this. Refer to the corresponding SAP training course (HA series) to learn more about delta merge.

Multi Tenancy

With multi-tenancy there is a strong separation of business data, and also of users, who must be kept apart.

Each tenant has its own isolated database.
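
As a sketch (the database name and password are invented), tenant databases are administered from the system database:

    -- Executed while connected to the system database.
    CREATE DATABASE dev_tenant SYSTEM USER PASSWORD Initial1a;

    -- Tenants can be stopped and started independently of each other.
    ALTER SYSTEM STOP DATABASE dev_tenant;
    ALTER SYSTEM START DATABASE dev_tenant;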

Business users have no idea that they are sharing a system with others running different applications. The system layer is used to manage system-wide settings and cross-tenant operations such as backups. The benefit of a multi-tenancy platform is that we can host multiple applications on a single SAP HANA infrastructure and share common resources, in order to simplify and reduce costs. Multi-tenancy is the basis for cost-efficient cloud computing. In the following exercise, you can skip the first step if you have already logged on.

Locate the table MARA by using a filter on the Tables node. Open the definition of table MARA and identify whether the table is row or column store. Identify the key columns of table MARA. Identify the number of records loaded into the table, and also the storage used by the main and delta areas.
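
The same information can also be gathered with SQL against the system and monitoring views; a sketch, assuming MARA is a column table in the current system:

    -- Is MARA a row or a column table?
    SELECT table_name, is_column_table
      FROM tables
     WHERE table_name = 'MARA';

    -- Which columns make up the primary key?
    SELECT column_name, position
      FROM constraints
     WHERE table_name = 'MARA' AND is_primary_key = 'TRUE';

    -- How many records are loaded, and how much storage do main and delta use?
    SELECT record_count, memory_size_in_main, memory_size_in_delta
      FROM m_cs_tables
     WHERE table_name = 'MARA';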
