User and authorization administration in the Java stack is a pain in the neck – that's a fact! The identity management tools are inferior to those of the ABAP stack, but there are nevertheless ways to make your life a bit easier…
Mass user maintenance in the UME
When it comes to mass user creation/modification in the Java UME (database only, no ABAP- or LDAP-data source), no tool like SU10 exists and many admins choose the hard way of creating users one by one... but wait... the "Identity Management" screen has an "Import" button:
Standard Format for UME imports
The screen behind that "Import" button provides little more than a text field, which needs to be populated with user master data in the correct format (btw.: the amount of importable data is limited to 1 MiB).
The import format is documented here, but SAP provides no easy solution to create data in that format.
This is – you probably guessed it – the point where my solution comes in.
Generally speaking, the "Standard Format" – as SAP calls it – is similar to the format of many .ini files and thus quite simple.
It can be used to create and modify users, groups, and roles — for users, a typical import record looks like this:
Squeeze mass user data into the Standard Format
For this task, I've prepared a very simple Excel file for you… download it here:
You can enter the user name, first and last name, password, and up to three roles for up to 100 users into columns A–G.
The formula in column H generates the expected format from the input data.
When finished, simply select the cells in column H starting from row 2 (i.e. excluding the header).
Unfortunately, Excel is a very smart tool 😕 and automagically inserts quotation marks around the copied cell contents.
You need to remove these quotation marks manually from the copied data…
Alternatively, you can also copy the clipboard's contents into an empty Word document, then copy everything again – that way the quotes are removed, too.
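If you'd rather script the conversion than fight Excel's quoting, the same records can be generated in a few lines of Python. This is only a sketch: the section and attribute names ([User], uid, first_name, last_name, password) reflect my reading of the Standard Format documentation – double-check them (and especially the role-assignment syntax, which is omitted here) against the documentation linked above before importing anything.

```python
# Sketch: build UME "Standard Format" user records from plain data.
# Attribute names are assumptions -- verify against the import documentation.

def to_standard_format(users):
    """users: list of dicts with keys uid, first_name, last_name, password."""
    lines = []
    for u in users:
        lines.append("[User]")
        lines.append(f"uid={u['uid']}")
        lines.append(f"first_name={u['first_name']}")
        lines.append(f"last_name={u['last_name']}")
        lines.append(f"password={u['password']}")
        lines.append("")  # blank line between records
    return "\n".join(lines)

print(to_standard_format([
    {"uid": "jdoe", "first_name": "John", "last_name": "Doe",
     "password": "Init1234"},
]))
```

The output can be pasted straight into the import text area – no quotation marks to clean up.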
Afterwards, paste the data into the text area on the Import screen of the Identity Management; then click "Upload".
The log on the next screen contains information about the import result.
See you next time!
Recently I wrote an article for a magazine published by the German-language SAP users group (DSAG).
In this post, I’d like to share an English translation with you (the original German version is available here: http://blaupause.dsag.de/berechtigungstrace-mit-komfort-funktion).
Authorization trace with convenience functions:
One of the numerous new features of Enhancement Package 6 is the authorization trace via transaction STAUTHTRACE. In principle, it works like the system trace ST01, but is limited to authorization checks. This makes it a valuable tool for authorization admins and adds some convenient functions.
Until now, it was necessary to start an authorization trace separately on each application server of a system, unless the relevant server was known beforehand. Transaction STAUTHTRACE simplifies this and allows starting a trace on one or more servers in a single step:
Without an explicit selection, the system-wide trace is automatically started on all available servers:
The evaluation section in the lower half of the screen offers detailed options to analyze the result and is much more advanced than its ST01 counterpart.
In a system-wide trace, the selection of the application server in the topmost section is also taken into account.
The option "Evaluate Extended Passport" is extremely handy, as it enriches the trace result with data from the system's kernel statistics (transaction STAD).
This additional information is helpful when it comes to RFC calls from other systems and consists of the following fields:
- "Initial Component" — the calling system, instance and client
- "Action Type" — e.g. a batch job run or a transaction call
- "Initial Action" — e.g. the name of the job or transaction code
The result is finally displayed in a nice, filterable ALV grid and not in that ugly ST01 list view.
Additionally, it is possible to dive into each line and jump to the affected user, the authorization object and its documentation as well as the line in the source code that triggered the authorization check. Simply double-click in the list or use the menu:
How to use the trace result in PFCG
The result of an authorization trace can now be used in PFCG directly – no matter whether it comes from STAUTHTRACE or the traditional ST01.
This can be achieved in two ways:
- Maintenance via the role menu
The "Import from Trace" option in a role's "Menu" tab allows importing the called applications from the trace: Transactions (S_TCODE), External- or Web-Services (S_SERVICE) and RFC Function Modules (S_RFC).
Unfortunately, if you import a transaction call, only the tcode is adopted from the trace - the other values that are checked during transaction start and execution are ignored; instead, the suggested values from SU24 are used.
- Maintenance of authorization values
In the role's authorization data maintenance screen, the new button "Trace" can be used to import the values that were checked from the result into the role.
In the example below, the role already contains the object S_USER_GRP – but no values yet. The actual check in this case used 02 for the field ACTVT, and the user group (CLASS) was "SUPER" – these values can easily be imported from the trace data with a few clicks.
💡 The new trace functionality of EhP 6 is a great feature for the analysis of authorization needs and problems - a neat enhancement of the existing toolbox!
A few months have passed since my last post – so it's about time for a new one! 💡
User name as a code condition
The system field SY-UNAME contains the name of the currently logged-on user and is quite frequently used by developers to facilitate tests by adding special conditions to their code. The block of code that is executed depending on the current user's name is usually only intended for the developer him-/herself.
Although developer guidelines almost always mandate the use of AUTHORITY-CHECKs, these checks might interfere with functional tests – and people might want to circumvent them (just for the tests, of course). No matter what the intention was, this approach leads to programs that perform authorization checks for all users – except for the developer of the code... a bad thing!
The following code snippet is probably one of the most prominent examples:
IF sy-uname <> 'DEVELOPER'.
  AUTHORITY-CHECK ...
ENDIF.
Right after the successful test phase, the code is transported to production, and the special-case condition might never be removed...
If we consider malicious behavior, such code qualifies as a backdoor and/or a hidden function – which means there is a need for action (at least to protect your developer colleagues)!
How to detect it
To find affected code, the SAP standard report RS_ABAP_SOURCE_SCAN is of great help — you can use it to search for plain strings or expressions in reports, classes, etc.
Since we're interested in IF conditions that check the value of SY-UNAME, I'd suggest searching with "IF .*sy-uname" as the expression and ticking the checkbox "String is a regular expression".
In the sample below, I limited the search to programs with names starting with Z*, but you will probably want to adjust this according to your needs (e.g. your registered namespaces).
The result shows two different conditions that use SY-UNAME in a possibly evil way:
The search expression above is rather straightforward...
Unfortunately, it can easily be tricked by a developer who knows about it:
DATA: foobar TYPE syuname.
foobar = sy-uname.

* Obfuscated condition
IF foobar <> 'MYSELF'.
  AUTHORITY-CHECK ...
ENDIF.
So – when you establish controls to prevent the usage of user-based conditions, this is something to keep in mind.
Humans are usually better at detecting fuzzy patterns than computers are... 😎
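To see concretely why the simple pattern misses the aliased variant, here is a quick sketch (Python regex; the two sample lines are the snippets from above):

```python
import re

# The suggested search expression, applied case-insensitively
# (ABAP source code is not case-sensitive)
pattern = re.compile(r"IF .*sy-uname", re.IGNORECASE)

direct  = "IF sy-uname <> 'DEVELOPER'."  # the classic pattern
aliased = "IF foobar <> 'MYSELF'."       # foobar was filled from sy-uname earlier

print(bool(pattern.search(direct)))   # True  -- the scan finds it
print(bool(pattern.search(aliased)))  # False -- the scan is blind here
```

Catching the aliased form would require following data flow, which is exactly what a plain source scan cannot do.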
Code that is bypassed based on the value of SY-UNAME should never be used!
➡ All instances of hard-coded user names in customer code used on productive systems should be corrected.
➡ Controls should be established to prevent such code from being transported.
You might want to integrate the SAP Code Inspector into your transport process.
After a relaxing summer holiday, it's time to fulfill the promise I made in my last post and provide the evaluation report for our log of RFC calls.
If you don't know what I'm talking about, please read the first part of this article.
This report basically parses the RFC log and shows the function groups that would've been required to execute the called modules.
In addition, it determines whether the respective users currently have the required S_RFC authorization – therefore, it allows you to focus on those entries where the authorization is missing.
- Create a new program in SE38 and copy-paste this source code.
- Set a program authorization group in the attributes section.
- Activate the program & execute it.
The selection screen should be rather self-explanatory:
There is only one noteworthy feature: the "Client" field is pre-filled with all clients for which no RFC connection could be determined automatically. The report checks the logical systems of all local SAP clients and tries to reach them via the assigned RFC connection (that should normally work in a well-configured system 😉). If this attempt fails, the respective client is excluded from the evaluation. Just log on to the excluded client(s) and run the report locally – this will always work!
The screenshot below shows an exemplary result. All lines with function groups for which authorizations exist are hidden by default; to unhide them, just remove the filter (marked in red below).
The icons in the "Auth. check" column have the following meaning:
» User has the required authorization — filtered out by default
» S_RFC authorization is missing — this is what we're interested in
» User is locked
» User does not currently exist
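The classification behind those icons can be sketched as follows. This is purely illustrative Python – the real report does these checks in ABAP against the user master data; the user names and data structure below are made up:

```python
# Hypothetical user master data: existence, lock status, and the set of
# function groups the user is authorized for via S_RFC (real S_RFC values
# may contain * patterns -- ignored in this sketch).
users = {
    "ALEREMOTE": {"exists": True, "locked": False, "fugr_auths": {"SYST", "ERFC"}},
    "OLDUSER":   {"exists": True, "locked": True,  "fugr_auths": set()},
}

def classify(uname, required_fugr):
    """Return the icon category for one log line, as in the result list."""
    u = users.get(uname)
    if u is None or not u["exists"]:
        return "user does not exist"
    if u["locked"]:
        return "user is locked"
    if required_fugr in u["fugr_auths"]:
        return "authorization exists"      # filtered out by default
    return "S_RFC authorization missing"   # the interesting case

print(classify("ALEREMOTE", "SYST"))   # authorization exists
print(classify("ALEREMOTE", "ZFOO"))   # S_RFC authorization missing
```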
In this article, I'll show you a handy way of identifying the S_RFC authorizations your users need; this method helped me a lot recently.
Generally, you might be interested in this topic, because…
- … you were asked to raise the value of profile parameter auth/rfc_authority_check from zero to a greater value
- … you need a practical approach to improve your S_RFC authorizations
- … you updated your SAP kernel to a patch level ≥ 7.20-400 or ≥ 7.21-041 (see SAP Note 1785761)
The authorization object S_RFC consists of three fields, but only one of them is of interest for us: RFC_NAME – which is checked against the called function module's group (the other two fields have only one possible value each, so we'll ignore them here).
I opted for a heuristic approach to determine values for that field… so first we'll collect a list of function module calls that occur on a productive system. In part 2 of this series, we'll use that list to determine the affected function groups and derive the required S_RFC values from that.
Unfortunately, this approach assumes that all required RFC calls succeed – so during the analysis phase, S_RFC authorizations have to be (or stay?) oversized to ensure no authorization problems distort the result. I'll leave it to you how you deal with that… but you might want to think about setting the profile parameter auth/rfc_authority_check to zero… Danger, Will Robinson! → this has security implications! 😕
Obtaining a list of called function modules per user is possible in various ways:
- the Security Audit Log (tcode SM19/20 » audit class "RFC call")
- the Business Transaction Analysis (tcode STAD)
- … if you have another good idea, please leave a comment …
Using the Security Audit Log would imply some nasty problems: the log size per day is limited (parameters rsau/max_diskspace/*), and all messages generated after that limit is reached are lost.
The functionality of tcode STAD, on the contrary, matches our needs quite exactly. Furthermore, there is no need to configure anything, as the statistics are recorded anyway (in fact, the profile parameter stat/level has to be set to 1… but that's the system default). The structure used to record the statistics contains a field that holds the called function modules — so another benefit of this method is that we don't have to split a text string (like the one stored in an Audit Log message text, e.g. "Successful RFC Call RFCPING (Function Group = SYST)").
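Just to illustrate the string surgery the STAD field spares us: extracting module and group from an Audit Log message text would mean parsing along these lines (sketch in Python, using the example message quoted above):

```python
import re

# Parsing the Audit Log message text instead of reading a clean STAD field:
msg = "Successful RFC Call RFCPING (Function Group = SYST)"
m = re.search(r"RFC Call (\S+) \(Function Group = (\S+)\)", msg)
fumod, fugrp = m.group(1), m.group(2)
print(fumod, fugrp)  # RFCPING SYST
```

Fragile, and one more reason to prefer the statistics records.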
I chose the second solution — evaluating statistics from STAD —, because it seems to be smarter, more reliable… and gives me the opportunity to code a bit! 😉
The next step is to create a new report called ZS_STAD_EXTRACT_RFC_CALLS and copy-paste this source code.
Then you need to set up two new customer tables that hold the data we want to collect.
Go to SE11 and create the tables ZSSTAD_RFC_DATA and ZSSTAD_LASTRUN.
I'd suggest using the following settings in the subsequent steps:
- Delivery class "A" = application table,
- Data class "APPL1" = transaction data, transparent tables (in: Technical settings),
- Size category "0" = up to 100,000 entries (in: Technical settings) and
- Enhancement category "Can be enhanced (deep)" (menu: Extras → Enhancement category)
The field definitions can be found in the top comment of the report source code; use them as shown below:
Then please repeat these steps for the second table.
Last but not least, you should schedule the report to run every hour — that's a good value because the runtime of the report stays rather short and there's no danger of losing data (the retention period for STAD data is usually 48 hours, because the statistics files are written every hour and the parameter stat/max_files determines the number of files kept – 48 by default).
You also might want to increase the profile parameter stat/rfcrec, which determines the maximum number of RFC calls in a session that will be recorded in STAD. The default value of 5 is probably not sufficient for all cases!
Please check SAP Note 1964997 for information on the parameters stat/rfc/distinct and stat/rfc/distinct_depth, which are also relevant. Thanks to Christian Wippermann for pointing me to this!
So what does it do?
The report reads all statistics records written since it was last started (a timestamp saved in table ZSSTAD_LASTRUN) or — if that table contains no values — the records of the last hour. The records are filtered for RFC calls (all other record types are discarded) and the called function modules' groups are determined.
As the last step, this information is saved to table ZSSTAD_RFC_DATA; each line contains:
- the date (DATUM),
- SAP client (MANDT),
- calling user (UNAME),
- called function module (FUMOD),
- the respective function group (FUGRP) and
- the number of calls (NCALL) per line.
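The collection step above boils down to counting calls per key. Here it is sketched in Python for illustration only — the actual report is ABAP, and the sample records and the lookup table are made up, except for RFCPING/SYST, which we already saw above:

```python
from collections import Counter

# Hypothetical sample of STAD-style records: (date, client, user, called module)
records = [
    ("20130504", "000", "SAPJSF", "RFCPING"),
    ("20130504", "000", "SAPJSF", "RFCPING"),
    ("20130504", "100", "ZUSER",  "Z_SOME_MODULE"),
]

# Module -> function group lookup (the ABAP report derives this from the
# function library; Z_SOME_MODULE/ZGRP are made-up names)
fugrp_of = {"RFCPING": "SYST", "Z_SOME_MODULE": "ZGRP"}

# NCALL per (DATUM, MANDT, UNAME, FUMOD, FUGRP) -- mirrors the table layout
ncall = Counter(
    (datum, mandt, uname, fumod, fugrp_of[fumod])
    for datum, mandt, uname, fumod in records
)

for key, n in sorted(ncall.items()):
    print(*key, n)
```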
In the below example, the user SAPJSF called RFCPING 22 times on the 4th of May 2013 in client 000:
The information in this table will later be used to determine the values for S_RFC.
In part 2 of this series, I'll post a nice evaluation report for the above log…
See you then!