I have been learning SAP for a year and a half now, and I can say it was a long journey until I started to warm up to the tools and the technology itself.
Beyond getting access to the different tools (including paid versions), my biggest struggle was getting sample data from SAP that reflects the structure and complexity of a real system.
Given this challenge, last Friday I had the opportunity to attend an "SAP CodeJam". It's one of the events run by the SAP Community, and these happen all around the world.
In this session, DJ Adams, an expert in CAP (the Cloud Application Programming model), presented a sample of how to do a CAP service integration. I will be honest with you: I don't really master the SAP Business Application Studio, and the CAP model even less. The highlight was that you can alternatively use Microsoft VS Code and Docker Desktop, which made the process way easier.
Here is the Git repository of the CodeJam: https://github.com/hncelver/cap-service-integration-codejam
If you are interested in this process specifically, I really recommend checking it out!
Now let's talk about SAP S/4HANA sample data. In this workshop, DJ Adams used data from an SAP sandbox. To get the data, we accessed the SAP Business Accelerator Hub, an SAP platform that brings together different SAP products and provides their respective APIs to access the data.
In the case presented, we used the Business Partner (A2X) data, and in exercise 3 of the repository, DJ Adams shows how to inspect the model to better understand which data we are working with.
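Before parsing anything, it helps to know that the query options in the request URL control what the service returns. `$top`, `$select`, and `$inlinecount` are standard OData v2 options, so (assuming the sandbox entity supports `$select`, which I haven't verified for every field) you can build a URL that only asks for the fields you care about:

```python
from urllib.parse import urlencode

base = ("https://sandbox.api.sap.com/s4hanacloud/sap/opu/odata/sap/"
        "API_BUSINESS_PARTNER/A_BusinessPartner")

# Standard OData v2 query options: limit the row count and
# pick only the fields we actually need
params = {
    "$top": "50",
    "$select": "BusinessPartner,BusinessPartnerFullName,BusinessPartnerUUID",
    "$inlinecount": "allpages",
}

# safe="$," keeps the OData-specific characters readable in the URL
url = base + "?" + urlencode(params, safe="$,")
print(url)
```

A smaller `$select` list means a smaller payload and less XML to walk through later.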
So, how to get the data?
I wrote a Python script that calls the API and parses the response using the xml.etree library:
# import libraries of interest
import requests
import pandas as pd
import xml.etree.ElementTree as ET

# URL from the SAP Business Accelerator Hub request
url = "https://sandbox.api.sap.com/s4hanacloud/sap/opu/odata/sap/API_BUSINESS_PARTNER/A_BusinessPartner?$top=50&$inlinecount=allpages"

# Use your personal API key from the site
headers = {
    "APIKey": "{key from site}"
}

response = requests.get(url, headers=headers)
root = ET.fromstring(response.content)

# Namespaces used in the OData Atom response
ns = {
    'atom': 'http://www.w3.org/2005/Atom',
    'm': 'http://schemas.microsoft.com/ado/2007/08/dataservices/metadata',
    'd': 'http://schemas.microsoft.com/ado/2007/08/dataservices'
}

entries = []
for entry in root.findall('atom:entry', ns):
    props = entry.find('atom:content/m:properties', ns)
    if props is not None:
        bp = props.find('d:BusinessPartner', ns)
        name = props.find('d:BusinessPartnerFullName', ns)
        uuid = props.find('d:BusinessPartnerUUID', ns)
        entries.append({
            "BusinessPartner": bp.text if bp is not None else "",
            "BusinessPartnerFullName": name.text if name is not None else "",
            "BusinessPartnerUUID": uuid.text if uuid is not None else ""
        })

df = pd.DataFrame(entries)
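If you would rather skip XML parsing altogether, OData v2 services conventionally also return JSON when you append `$format=json` to the URL (or send an `Accept: application/json` header) — check the Hub documentation to confirm this for the sandbox API. The records then sit under the `d` -> `results` keys, which map straight into pandas. A minimal sketch with a made-up payload standing in for `response.json()`:

```python
import pandas as pd

# Hypothetical payload following the OData v2 JSON convention;
# with a live call this would be: payload = response.json()
payload = {
    "d": {
        "results": [
            {"BusinessPartner": "1000001",
             "BusinessPartnerFullName": "Acme Corp",
             "BusinessPartnerUUID": "11111111-aaaa-bbbb-cccc-000000000001"},
        ]
    }
}

# Records live under d.results; keep just the columns of interest
records = payload["d"]["results"]
df = pd.DataFrame(records)[
    ["BusinessPartner", "BusinessPartnerFullName", "BusinessPartnerUUID"]
]
print(df.shape)  # (1, 3)
```

This trades the namespace bookkeeping of ElementTree for a couple of dictionary lookups.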
After this step, you have the data in a DataFrame and you can play around with it just like with any other dataset.
It's important to note that not all fields are filled with information, but the main key fields are connected, enabling the creation of a model.
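As a quick illustration of that point (the records below are made up, not real sandbox data), once the entries are in a DataFrame you can use the usual pandas tools to separate the filled-in rows from the empty ones:

```python
import pandas as pd

# Hypothetical sample entries, shaped like the ones parsed above
entries = [
    {"BusinessPartner": "1000001", "BusinessPartnerFullName": "Acme Corp",
     "BusinessPartnerUUID": "11111111-aaaa-bbbb-cccc-000000000001"},
    {"BusinessPartner": "1000002", "BusinessPartnerFullName": "",
     "BusinessPartnerUUID": "11111111-aaaa-bbbb-cccc-000000000002"},
]

df = pd.DataFrame(entries)

# Keep only rows where the full name was actually filled in
named = df[df["BusinessPartnerFullName"] != ""]
print(named["BusinessPartner"].tolist())  # ['1000001']
```

The key fields (`BusinessPartner`, `BusinessPartnerUUID`) are populated on every row, which is what makes joining this entity to others possible even when descriptive fields are blank.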