DeTokenization of a Positional Type Tokenized Input File with Stream Data Using the Detokenization.properties File
In this sample, Bulk DeTokenization is used to detokenize a positional type tokenized stream data file with the detokenization.properties file.
Creating the Input Data File
Below is the data that will be used to populate the tokenized.txt file:
1AnnelKochiFsingle0553674881372549Vvym9@wxardhv.gwt2BuickEmmeiMsingle9845887934409044Nush2@nvgeulx.ddn3FrankHardyMsingle7034014699844008vsoje@mnaprco.asu4MackiGowenMsingle4177420022619493Icqqx@fqtmucn.iug
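If you need to produce this file yourself, the following is a minimal Java sketch, not part of the Bulk Utility, that writes the four sample records to tokenized.txt as one continuous stream with no record separators, matching the layout shown above:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CreateTokenizedInput {
    public static void main(String[] args) throws Exception {
        // The four sample records shown above, each 51 characters wide.
        String[] records = {
                "1AnnelKochiFsingle0553674881372549Vvym9@wxardhv.gwt",
                "2BuickEmmeiMsingle9845887934409044Nush2@nvgeulx.ddn",
                "3FrankHardyMsingle7034014699844008vsoje@mnaprco.asu",
                "4MackiGowenMsingle4177420022619493Icqqx@fqtmucn.iug"
        };
        // Records are concatenated into one continuous stream with no record
        // separators, which is what the stream-style sample above shows.
        // Written here as ASCII; swap in Charset.forName("ibm500") if the file
        // must be encoded in the code page configured by CodePageUsed.
        Files.write(Paths.get("tokenized.txt"),
                String.join("", records).getBytes(StandardCharsets.US_ASCII));
    }
}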
Setting Parameters for Detokenization.properties File
#####################
# Input Configuration
# Input.FilePath
# Input.Type
#####################
#
Input.FilePath = C:\\Users\\rajeskum\\Desktop\\SaveE\\tokenized.txt
#
Input.Type = Positional
################################
# Positional Input Configuration
# Input.Column0.Start
# Input.Column0.End
# ...
# Input.ColumnN.Start
# Input.ColumnN.End
################################
#
Input.Column0.Start = 0
Input.Column1.Start = 1
Input.Column2.Start = 6
Input.Column3.Start = 11
Input.Column4.Start = 12
Input.Column5.Start = 18
Input.Column6.Start = 34
#
#
Input.Column0.End = 0
Input.Column1.End = 5
Input.Column2.End = 10
Input.Column3.End = 11
Input.Column4.End = 17
Input.Column5.End = 33
Input.Column6.End = 50
###########################################
# DeTokenization Configuration
# DeTokenizer.Column0.TokenVault
# DeTokenizer.Column0.TokenFormat
# ...
# DeTokenizer.ColumnN.TokenVault
# DeTokenizer.ColumnN.TokenFormat
############################################
#
# DeTokenizer.ColumnN.TokenVault
#
# Specifies the name of the token vault wherein plain text corresponding to tokens of
# this column is stored. If the column does not need to be DeTokenized, do not specify
# this parameter. If this parameter is specified, all other DeTokenization
# parameters for the same column must also be specified. Token vault
# specified in this parameter must exist before running bulk migration.
#
DeTokenizer.Column5.TokenVault = TEST1
DeTokenizer.Column6.TokenVault = TEST1
#
# DeTokenizer.ColumnN.TokenFormat
#
# Specifies token format that will be used to DeTokenize this column. If the
# column does not need to be DeTokenized, do not specify this parameter. If
# this parameter is specified, all other DeTokenization parameters for the
# same column must also be specified.
#
# Valid values
# <number>
# 0 for plain Text
# 6 for Masked Token
#
DeTokenizer.Column5.TokenFormat = 0
DeTokenizer.Column6.TokenFormat = 0
######################
# Output Configuration
# Output.FilePath
# Output.Sequence
######################
#
Output.FilePath = C:\\Desktop\\migration\\Detoken.txt
# Specifies the file path where the intermediate temporary chunks of
# outputs are stored.
#
# Note: If no intermediate file path is set, then the path specified in
# Output.FilePath is used as the intermediate file path.
#
Intermediate.FilePath =
# Set a positive value for each column that is to be detokenized. For example, columns 5 and 6
# have been set positive below, so only these two columns will be detokenized.
Output.Sequence = 0,-1,-2,-3,-4,5,6
# TokenSeparator
#
# Specifies if the output values are space separated or not.
# Note: This parameter is ignored if Input.Type is set to Delimited.
#
# Valid values
# true
# false
# Note: Default value is set to true.
# Note: This parameter is not applicable when StreamInputData is set to true.
#
TokenSeparator = true
#
# StreamInputData
#
# Specifies whether the input data is streamed or not.
#
# Valid values
# true
# false
# Note: Default value is set to false.
#
# Note: This parameter is ignored if Input.Type is set to Delimited.
#
StreamInputData = true
#
# CodePageUsed
#
# Specifies the code page in use.
# Used with the EBCDIC character set; for example, use "ibm500" for EBCDIC International.
# https://docs.oracle.com/javase/7/docs/api/java/nio/charset/Charset.html
#
# Note: If no value is specified, by default, the ASCII character set is used.
#
CodePageUsed = ibm500
#
# FailureThreshold
#
# Specifies the number of errors after which the Bulk Utility aborts the
# detokenization operation.
# Valid values
# -1 = Detokenization continues irrespective of number of errors during the
# operation. This is the default value.
# 0 = Bulk Utility aborts the operation on occurrence of any error.
# Any positive value = Indicates the failure threshold, after which the Bulk
# Utility aborts the operation.
#
# Note: If no value or a negative value is specified, Bulk Utility will continue
# irrespective of number of errors.
#
FailureThreshold = -1
##############################################################################
# END
##############################################################################
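As a quick sanity check of the positional configuration, the Start/End offsets above can be applied to the first sample record. The following is a minimal sketch, not part of the Bulk Utility, assuming the End offsets are inclusive so that each field is record.substring(Start, End + 1):

public class PositionalOffsetCheck {
    // {Start, End} pairs copied from the Input.ColumnN.Start / Input.ColumnN.End settings above
    private static final int[][] COLUMNS = {
            {0, 0}, {1, 5}, {6, 10}, {11, 11}, {12, 17}, {18, 33}, {34, 50}
    };

    public static void main(String[] args) {
        // First record of the sample stream (51 characters)
        String record = "1AnnelKochiFsingle0553674881372549Vvym9@wxardhv.gwt";
        for (int i = 0; i < COLUMNS.length; i++) {
            // Assumes End offsets are inclusive, hence End + 1
            String field = record.substring(COLUMNS[i][0], COLUMNS[i][1] + 1);
            System.out.println("Column" + i + " = " + field);
        }
    }
}

Columns 5 and 6 hold the card number token and the email token, which is why only these two columns are assigned a token vault, a token format, and a positive entry in Output.Sequence.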
Running CipherTrust Vaulted Tokenization Bulk Utility
Enter the following command to detokenize with the CT-V Bulk Utility in a Windows environment:
java -cp SafeNetTokenService-8.12.3.000.jar com.safenet.token.migration.main detokenization.properties -d DSU=NAE_User1 DSP=qwerty12345 DBU=DB_User1 DBP=abcd1234
Reviewing the Output File
The output data file is saved at the path specified in the detokenization.properties file, with the name Detoken.txt. As specified by the Output.Sequence parameter, only columns 5 and 6 have been detokenized.
Here is the data from the output file:
1AnnelKochiFsingle9876123412348810Koch1@company.com2BuickEmmeiMsingle9390404090949044Emme1@company.com3FrankHardyMsingle9876123412349010fannk@company.com4MackiGowenMsingle9390384990389493Gowen@company.com
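To spot-check the result programmatically, the same offsets can be applied to the output stream. The following is a minimal sketch, not part of the Bulk Utility, assuming the output keeps the same 51-character fixed-width layout (in this sample the detokenized values have the same widths as the tokens they replace):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class InspectDetokenizedOutput {
    // One record spans 51 characters; Column5 occupies offsets 18-33 and
    // Column6 occupies offsets 34-50 (End offsets inclusive).
    private static final int RECORD_LENGTH = 51;

    public static void main(String[] args) throws Exception {
        // Detoken.txt is the file written to Output.FilePath; the character set
        // is an assumption here and should match the actual output encoding.
        String stream = new String(
                Files.readAllBytes(Paths.get("Detoken.txt")), StandardCharsets.US_ASCII);
        for (int pos = 0; pos + RECORD_LENGTH <= stream.length(); pos += RECORD_LENGTH) {
            String record = stream.substring(pos, pos + RECORD_LENGTH);
            System.out.println("Column5 = " + record.substring(18, 34)
                    + ", Column6 = " + record.substring(34, 51));
        }
    }
}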