I'm trying to create/understand APDU content for updating the EFmsisdn file on a USIM. As stated in ETSI TS 131 102, section 4.2.26, the content of the file is an alpha identifier of 1 to X bytes followed by 14 mandatory bytes (length of BCD number, TON/NPI, dialling number/SSC string, CCP2 record identifier and Ext5 record identifier).
I have the following valid ISO 7816 command sequence for selecting the file and updating the first record:
00A4090C047FFF6F4000DC010422FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF06815545443333FFFFFFFFFFFFFF
What I don't understand is the number of filler bytes (0xFF). In the table, it says 1 to X bytes for the alpha identifier. What is the alpha identifier, and how can I know its length?
I appreciate any hint.
My understanding is that EFmsisdn follows a similar convention to EFadn.
To answer your question:
What is the alpha identifier?
It is a text identifier for the number. So in a phonebook it will be the name of the contact. Some operators may set this to "My Number" for example.
You can get its length by taking the total length of the returned record and subtracting the 14 fixed mandatory bytes. In your UPDATE RECORD command, Lc = 22h = 34 bytes, so the alpha identifier field is 34 - 14 = 20 bytes, which is why you see 20 bytes of 0xFF filler before the 06 81 ... number part.
For a little more info, check out EFadn in TS 131 102.
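To make the arithmetic concrete, here is a minimal Python sketch (an illustration of the layout described above, not code from the spec) that splits a record into its alpha identifier and number parts:

```python
MANDATORY_LEN = 14  # length-of-BCD + TON/NPI + 10-byte number + CCP2 + Ext5

def parse_msisdn_record(record: bytes):
    # Everything before the 14 trailing mandatory bytes is the alpha
    # identifier, padded with 0xFF when unused.
    alpha_len = len(record) - MANDATORY_LEN        # 34 - 14 = 20 in your APDU
    alpha = record[:alpha_len].rstrip(b"\xff")
    bcd_len = record[alpha_len]                    # counts TON/NPI + number bytes
    ton_npi = record[alpha_len + 1]
    number_bytes = record[alpha_len + 2 : alpha_len + 1 + bcd_len]
    # Dialling digits are swapped-nibble BCD, with 0xF as filler.
    digits = "".join(f"{b & 0x0F:x}{b >> 4:x}" for b in number_bytes)
    return alpha, ton_npi, digits.rstrip("f")
```

For the record in your command this yields an empty alpha identifier (all 0xFF fillers), TON/NPI 0x81 and the number 5554443333.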
I am testing the CASESAFEID(Id) function to get the 18-character IDs in my report. I created a formula field and used that field in a report. I am noticing that the last 3 characters of most of the records in this field are the same. I could not find the reason or logic for these 3 characters via a Google search, so I am posting it here.
My formula field:
My report:
I am using a Trailhead Playground for this testing.
Yes, that can happen. IDs that have uppercase letters in the same positions will have the same 3-character suffix. You don't have to worry about it. There are some posts if you're really interested in the algorithm:
https://astadiaemea.wordpress.com/2010/06/21/15-or-18-character-ids-in-salesforce-com-%E2%80%93-do-you-know-how-useful-unique-ids-are-to-your-development-effort/
https://salesforce.stackexchange.com/questions/1653/what-are-salesforce-ids-composed-of
https://developer.salesforce.com/docs/atlas.en-us.object_reference.meta/object_reference/field_types.htm (scroll down to ID field type)
They're essentially a checksum-type value to ensure that valid Salesforce Ids do not differ from one another only in case. This provides safety for tools like Excel that treat abc and AbC as the same value.
The behavior you are observing is normal. There's no need to test this formula function as such; it's a standard part of the platform.
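If you're curious about the mechanics, here is a minimal Python sketch of the suffix algorithm described in those posts (illustration only - on-platform you would just use CASESAFEID):

```python
# Characters used for the 3-character suffix of an 18-character Id.
SUFFIX_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"

def to_18_char_id(id_15: str) -> str:
    if len(id_15) != 15:
        raise ValueError("expected a 15-character Salesforce Id")
    suffix = ""
    for start in (0, 5, 10):                # three 5-character blocks
        block = id_15[start:start + 5]
        # Each uppercase letter sets one bit (position 0 = least
        # significant), giving a value from 0 to 31.
        value = sum(1 << i for i, ch in enumerate(block) if ch.isupper())
        suffix += SUFFIX_CHARS[value]
    return id_15 + suffix
```

You can see directly why two Ids whose uppercase letters sit in the same positions end up with the same suffix.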
I have a large number of internal path references in a LabVIEW project. Each path is entered manually into a bundle function along with a reference to a numeric indicator on the block diagram. Because I have a lot of paths and therefore a lot of numeric indicators, the block diagram is a big mess.
I want to streamline this with a CSV file containing an n×2 array. Column 1 holds the path of the internal reference itself; column 2 holds the name of the numeric indicator (already placed on the block diagram and front panel) that corresponds to the path in column 1. Using a for loop, I want to iterate over each row of the CSV file and, with a bundle function, bundle the path (at index 0) and a reference to the numeric indicator itself (at index 1). That is the actual problem I am having: I don't know how to dynamically assign the name of the numeric indicator to a control reference as the loop executes. See the state of my current VI for more reference. Please help me find a way to dynamically create a reference to each numeric indicator as the loop runs.
Right now, the closest I have got to the goal is getting the name of the numeric indicator (index 1 in the CSV) assigned to a string reference, but my numeric indicators are still unreferenced and not connected to the bundle function.
Note that column 2 in the CSV has the same names as the numeric indicators: "numeric", "numeric 1", "numeric 2", "numeric 3", "numeric 4".
Read this https://forums.ni.com/t5/LabVIEW/How-to-get-control-reference-from-control-indicator-label-name/td-p/3884075 to learn how to obtain a control/indicator reference by its label name. That should solve your problem: use the first CSV column as the file path and the second column to obtain the indicator reference, then bundle the two of them and that's it!
I have made the following string for a GS1 DataMatrix:
è010506060985000521sn1234567890ab 1002TRIAL003 17200228
ASCII 232 (FNC1)
(01) Product Code (aka GTIN)
(21) Serial Number
ASCII 29 (aka Group Separator)
(10) Lot/Batch
ASCII 29 (aka Group Separator)
(17) Expiry Date
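In other words, the string is assembled like this (a minimal Python sketch of the payload; the encoder turns the leading ASCII 232 into the FNC1 codeword):

```python
GS = "\x1d"    # ASCII 29, the GS1 group separator
FNC1 = "\xe8"  # ASCII 232 ("è"), mapped to FNC1 by the encoder

# Fixed-length AIs (01, 17) need no terminator; variable-length AIs
# (21, 10) must be terminated with GS unless they end the message.
payload = (
    "01" + "05060609850005"         # (01) GTIN, fixed 14 digits
    + "21" + "sn1234567890ab" + GS  # (21) serial number, variable length
    + "10" + "02TRIAL003" + GS      # (10) lot/batch, variable length
    + "17" + "200228"               # (17) expiry date, YYMMDD
)
message = FNC1 + payload
```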
I am passing this string to a DevExpress control, with the symbology set to DataMatrix and the compatibility mode set to ASCII.
This barcode scans correctly as a GS1 DataMatrix. But when I sent this string to our printing partner in China, he printed it, and when I scan his barcode I get the error "Unknown encoding".
I think their system is not able to encode ASCII 232 ("è").
Is there any alternative way?
I am now just replacing the FNC1 start character, ASCII 232, with ASCII 29. Is that the correct way? Is the result still a GS1 DataMatrix?
(When I scan it in one mobile app it comes up as GS1 DataMatrix, but when I scan it in another app it comes up as plain DataMatrix.)
I want to achieve a GS1 DataMatrix...
Thanks
This issue is totally dependent on the hardware used; the way to indicate the FNC1 character may differ between printer families/types. Do you have info on which one is used in your case?
First, your printer partner should check the label he is creating himself (there is an easy-to-use GS1 app for smartphones to do that), so he can directly see whether the expected information is present and well encoded.
Then, you should check which printer type he is using and which software is used to create the print mask/job. I know lots of people use NiceLabel, for example, but I remember that issues with the FNC1 character can occur if you are using some recent Zebra printers. This is something the printer's after-sales support can probably help with if it is something similar.
[EDIT]: In case of doubt this can help, but you probably have it already.
Based on what you said, your part is acting like a scanner, so check chapter 2.2.1:
Important: In accordance with ISO/IEC 15424 - Data Carrier Identifiers (including Symbology Identifiers), the Symbology Identifier is the first three characters transmitted by the scanner indicating symbology type. For a GS1 DataMatrix the symbology identifier is ]d2.
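As a quick check on the scanner side, here is a minimal sketch (Python; the scanned string is hypothetical) of testing for that identifier:

```python
def is_gs1_datamatrix(scanned: str) -> bool:
    # Per ISO/IEC 15424, a scanner configured to transmit symbology
    # identifiers prefixes GS1 DataMatrix data with "]d2".
    return scanned.startswith("]d2")

# e.g. a scanner might transmit "]d20105060609850005..." for a GS1 DataMatrix
```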
I have a series of CSVs I import into a database via DataStage. I am attempting to do this using RCP and schema files.
I generate the schema files from the CSVs using an accompanying master table list that comes with the CSVs.
I am down to one problem: null handling when a numeric field is the last column in a particular table, and therefore the last entry in the schema file. The CSV is comma-delimited, with double quotes around strings and nothing at all for nulls.
The master list identifies some of these number columns as number(), which is indicative of an Oracle description of the output. To that end, I am trying this:
:nullable decimal[38,9] { default=0, text };
In this example, the precision and scale default to 38 and 9, unless specified elsewhere, such as decimal[10,2].
A null entry results in this error:
When validating import/export function: APT_GFIX_Decimal::validateParameters: the decimal "text" format is variable length, and no external length is specified;
you should possibly specify an appropriate "width" property; external format: {text, padchar=32, nofix_zero, precision=38, scale=9, round=trunc_zero, ascii}. [decimal/impexp.C:939]
so I tried:
:nullable decimal[38,9] { default=0, text, width=47 };
In this example, the precision and scale again default to 38 and 9; the width is the sum of the two values (38 + 9 = 47), unless specified elsewhere, such as decimal[10,2].
and I got:
ODBC_Connector_3,0: Input buffer overrun at field "", at offset: ### [impexp/group_comp.C:6006]
Lastly, I tried exactly what it said, and did this:
:nullable decimal[38,9] { default=0, text, padchar=32, nofix_zero, precision=, scale=, round=trunc_zero, ascii, width=47 };
Again, the precision and scale default to 38 and 9, with the width being the sum of the two values (38 + 9 = 47), unless specified elsewhere.
This third attempt produced the same error: Input buffer overrun at field "", at offset: ### [impexp/group_comp.C:6006]
Has anyone run into this? It only happens when a decimal is the last column in the table.
My record settings are: {intact, final_delim=none, record_delim='\n', charset='UTF8', delim=','}
Thank you very much.
I had the same issue. I tried the solutions mentioned in the question as well as in the above answer; they didn't work. It turned out my target column was decimal(14,10), i.e. 4 digits before the decimal point and 10 digits after. I was getting null values in the target even though I had actual data at the source - the source had more than 4 digits before the decimal point. I modified the target and source columns to decimal(16,10). On top of this, as mentioned in the question, we shouldn't put decimal columns at the end when using schema files, so I put a string column at the end at the source. I combined both of these changes and voilà - my data loaded properly into the target.
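For what it's worth, here is a hypothetical sketch of the tail of such a schema file after both changes (field names invented, syntax not validated against your DataStage version) - the decimal widened to [16,10] and a trailing string column so the variable-length decimal is no longer last:

```
AMOUNT: nullable decimal[16,10] { default=0, text };
LAST_COL: nullable string[max=10];
```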
I am a newbie to ABAP. I am trying this program with Open SQL, and when I execute it, the first column's data is always missing. I have looked it up and the syntax appears to be correct. I am using the KNA1 table, and the query is pretty simple too. If anybody notices the issue, please help me out.
DATA: WA_TAB_KNA1 TYPE KNA1,
IT_TAB_KNA1 TYPE TABLE OF KNA1,
V_KUNNR TYPE KUNNR.
SELECT-OPTIONS: P_KUNNR FOR V_KUNNR.
SELECT name1 kunnr name2
INTO TABLE IT_TAB_KNA1 FROM KNA1
WHERE KUNNR IN P_KUNNR.
LOOP AT IT_TAB_KNA1 INTO WA_TAB_KNA1.
WRITE:/ WA_TAB_KNA1-KUNNR,' ', WA_TAB_KNA1-NAME1.
ENDLOOP.
This is a classic - I suppose every ABAP developer has to experience this at least once.
You're using an internal table of structure KNA1, which means that your target variable has the following structure:
ccckkkkkkkkkklllnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN...
with ccc being the client (MANDT), kkkkkkkkkk being the field KUNNR (10 characters), lll the field LAND1 (3 characters), then 35 n's for the field NAME1, 35 N's for the field NAME2, and so on.
In your SELECT statement, you tell the system to retrieve the columns NAME1, KUNNR and NAME2 - in that order! This will yield a result set that has the following structure, using the nomenclature above:
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnkkkkkkkkkkNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
Instead of raising some kind of type error, the system then tries to squeeze the data into the target structure - mainly for historical reasons. Because the first fields are all character fields, it succeeds. The result: the field MANDT of your internal table contains the first three characters of NAME1, the field KUNNR contains characters 4-13 of the source field NAME1, and so on.
Fortunately the solution is easy: use INTO CORRESPONDING FIELDS OF TABLE instead of INTO TABLE. This causes the system to use a fieldname-based mapping when filling the target table. As tomdemuyt mentioned, it's also possible to roll your own target structure - and for large data sets that's a really good idea, because otherwise you're wasting a lot of memory. Still, sometimes that is not an option, so you really have to know this error: recognize it as soon as you see it and know what to do.
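Applied to the code in the question, the corrected statement looks like this:

```abap
SELECT name1 kunnr name2
  INTO CORRESPONDING FIELDS OF TABLE it_tab_kna1
  FROM kna1
  WHERE kunnr IN p_kunnr.
```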