# suitescript
a
Hi all, In a Map/Reduce script, I'm trying to create a file in plain text format with data separated by a pipe (|) delimiter. The script fetches details from a saved search, prepares the file, and saves it in a folder. The saved search contains around 50,000 results. After creating the file and comparing it with the saved search export, the data does not match (amounts and lines are getting jumbled). However, when I tested with a smaller dataset of about 2,000 results, the created file matched the saved search export correctly. Here is the code I used. Can anyone please let me know why the data is not fetching/matching correctly with the saved search in the code below? Has anyone faced this type of issue?
// Latest Back Up Code of Board File - 09/15/2025
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/search', 'N/file', 'N/log'], (search, file, log) => {
    const SAVED_SEARCH_ID = '15833';
    const OUTPUT_FILE_NAME = 'Transactions_Aug_2025_pipe_delimited_1.txt';
    const OUTPUT_FOLDER_ID = 1178218;

    const getInputData = () => {
        const savedSearch = search.load({ id: SAVED_SEARCH_ID });
        const pagedData = savedSearch.runPaged({ pageSize: 1000 }); // 1000
        return pagedData.pageRanges;
    };

    const map = (context) => {
        const pageRange = JSON.parse(context.value);
        const savedSearch = search.load({ id: SAVED_SEARCH_ID });
        const pagedData = savedSearch.runPaged({ pageSize: 1000 });
        const page = pagedData.fetch({ index: pageRange.index });
        page.data.forEach((result) => {
            const row = result.columns.map((col) => {
                let value = result.getText(col) || result.getValue(col) || '';
                value = value.toString()
                    .replace(/\r?\n|\r/g, ' ')
                    .replace(/\|/g, '\\|');
                return value;
            }).join('|');
            context.write({ key: 'results', value: row });
        });
    };

    const reduce = (context) => {
        const partialContent = context.values.join('\n');
        context.write({ key: 'output', value: partialContent });
    };

    const summarize = (summary) => {
        const savedSearch = search.load({ id: SAVED_SEARCH_ID });
        const columns = savedSearch.columns;
        const header = columns.map((col) => {
            let label = col.label || col.name;
            label = label.replace(/\|/g, '\\|');
            return label;
        }).join('|');

        let fullContent = header + '\n';
        summary.output.iterator().each((key, value) => {
            fullContent += value + '\n';
            return true;
        });

        const fileObj = file.create({
            name: OUTPUT_FILE_NAME,
            fileType: file.Type.PLAINTEXT,
            contents: fullContent,
            folder: OUTPUT_FOLDER_ID
        });
        const fileId = fileObj.save();
        log.audit('File Created', `Pipe-delimited file created with ID: ${fileId}`);

        if (summary.inputSummary.error) {
            log.error('Input Error', summary.inputSummary.error);
        }
        summary.mapSummary.errors.iterator().each((key, error) => {
            log.error(`Map Error for key: ${key}`, error);
            return true;
        });
        summary.reduceSummary.errors.iterator().each((key, error) => {
            log.error(`Reduce Error for key: ${key}`, error);
            return true;
        });
    };

    return { getInputData, map, reduce, summarize };
});
a
Map/Reduce scripts run concurrently: you're triggering 50 map stages with your 50 pages of search results, but they're writing to the reduce context in whatever order they finish, not the order of the search results.
The 2,000-result test worked because, I guess, there are only 2 pages and you had a 50:50 chance of which would finish first.
Can you parse the line data in the reduce stage and reimplement the correct order before writing it to the summary?
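Something like this, maybe, as a rough sketch of the keyed-ordering idea (not tested): buildRow stands in for your existing pipe-joining logic, getInputData stays the same as in your script, and the zero-padded pageIndex-rowIndex key is just one way to make the rows sortable again in summarize.
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/search'], (search) => {
    const SAVED_SEARCH_ID = '15833';

    const map = (context) => {
        const pageRange = JSON.parse(context.value);
        const pagedData = search.load({ id: SAVED_SEARCH_ID }).runPaged({ pageSize: 1000 });
        const page = pagedData.fetch({ index: pageRange.index });
        page.data.forEach((result, rowIndex) => {
            // key each row by a zero-padded "page-row" string so the original
            // search order can be rebuilt after the concurrent map stages finish
            const sortKey = String(pageRange.index).padStart(6, '0') + '-' +
                String(rowIndex).padStart(6, '0');
            context.write({ key: sortKey, value: buildRow(result) }); // buildRow = your existing pipe-joining logic
        });
    };

    // each key now holds exactly one row, so reduce just passes it through
    const reduce = (context) => {
        context.write({ key: context.key, value: context.values[0] });
    };

    const summarize = (summary) => {
        // collect every row, sort by the composite key, then join them in order
        const rows = [];
        summary.output.iterator().each((key, value) => {
            rows.push({ key: key, value: value });
            return true;
        });
        rows.sort((a, b) => a.key.localeCompare(b.key));
        const body = rows.map((r) => r.value).join('\n');
        // ...prepend the header row and create the file exactly as before
    };

    return { getInputData, map, reduce, summarize }; // getInputData unchanged from your script
});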
m
You also might have better luck writing a CSV file (perhaps using the CSV export task programmatically, like @alien4u has referenced below), then using something like PapaParse to read the CSV and write it back out with | as the delimiter.
(Though that might take more time than simply adjusting the script like Anthony said.)
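PapaParse isn't a native NetSuite module, so it would have to be uploaded to the File Cabinet and loaded as a custom module. A rough sketch, assuming the library sits next to the script and the CSV is small enough to read with getContents() (the path and function name are placeholders):
// './lib/papaparse.min' is a hypothetical File Cabinet path to the uploaded library
define(['N/file', './lib/papaparse.min'], (file, Papa) => {

    // load the CSV produced by the export task and re-serialize it pipe-delimited
    const csvToPipe = (csvFileId) => {
        const csvText = file.load({ id: csvFileId }).getContents();
        const parsed = Papa.parse(csvText, { skipEmptyLines: true });
        return Papa.unparse(parsed.data, { delimiter: '|' });
    };

    return { csvToPipe };
});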
a
task.SearchTask: native, async, no parsing issues...
// Requires the N/task module
// Add additional code ...
var searchTask = task.create({
    taskType: task.TaskType.SEARCH
});
searchTask.savedSearchId = 51;
var path = 'ExportFolder/export.csv';
searchTask.filePath = path;
var searchTaskId = searchTask.submit();
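If you need to know when the export has finished, the submitted task can also be polled with task.checkStatus. A quick sketch, assuming searchTaskId is the id returned by submit():
// e.g. in a later scheduled script execution, check whether the CSV export is done
var status = task.checkStatus({ taskId: searchTaskId });
if (status.status === task.TaskStatus.COMPLETE) {
    // ExportFolder/export.csv now exists and can be loaded with N/file
}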
👀 1
a
Thank you so much for the suggestions. I also need to add logic to split the files into multiple parts based on size. Each file should not exceed 10 MB, since the server we’re uploading to only supports files up to 10 MB. If I use task.create, how can I split the data into multiple files?
a
there's an option to add a dependent script that will run after the searchTask
call that and pass in the file id of the file you just created as a script parameter
// create and name the empty file first
const fileObj = file.create({
    name: filename,
    fileType: file.Type.CSV,
    folder: targetFolderId,
});

const fileId = fileObj.save();

// pass in the fileId to the search task
const searchTask = task.create({
    taskType: task.TaskType.SEARCH,
    savedSearchId: savedSearchId,
    fileId: fileId,
});

// create your script task
const scheduledScriptTask = task.create({
    taskType: task.TaskType.SCHEDULED_SCRIPT
});

// set the script options on the task
scheduledScriptTask.scriptId = dependentScript;
scheduledScriptTask.deploymentId = dependentDeploy;
scheduledScriptTask.params = { [dependentParam]: fileId };

// add the dependency to the search task
searchTask.addInboundDependency(scheduledScriptTask);

// submit the search task
const taskId = searchTask.submit();
log.debug('task id', taskId);
👍 2
Once the search finishes and generates the file, it will automatically trigger your scheduled script (or you can use an MR too, I guess) and pass in the fileId of the .csv that was just created. Then you can check the size of that file, and the line count or whatever, and create new files from that original exported file (note you can't modify the contents of a file with SuiteScript, you can only read them and write them to a new file).
So usually you have to delete the original when you're done to avoid confusion.
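Something along these lines for the dependent scheduled script, maybe. Just a rough, untested sketch: the script parameter id, folder id, and file names are placeholders, and it treats string length as a stand-in for bytes (close enough for plain ASCII data):
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/file', 'N/runtime', 'N/log'], (file, runtime, log) => {
    const MAX_BYTES = 10 * 1024 * 1024; // 10 MB cap per output file
    const OUTPUT_FOLDER_ID = 1178218;   // same target folder as the earlier script

    const savePart = (contents, partNumber) => {
        const id = file.create({
            name: 'export_part_' + partNumber + '.txt', // placeholder naming scheme
            fileType: file.Type.PLAINTEXT,
            contents: contents,
            folder: OUTPUT_FOLDER_ID
        }).save();
        log.audit('Part saved', 'file id ' + id);
    };

    const execute = () => {
        // the search task passes the exported file's id in via this (hypothetical) parameter
        const fileId = runtime.getCurrentScript().getParameter({
            name: 'custscript_export_file_id'
        });
        const csvFile = file.load({ id: fileId });

        let part = 1;
        let buffer = '';

        // read the export line by line instead of loading the whole thing at once
        csvFile.lines.iterator().each((line) => {
            const next = line.value + '\n';
            if (buffer && buffer.length + next.length > MAX_BYTES) {
                savePart(buffer, part++);
                buffer = '';
            }
            buffer += next;
            return true;
        });
        if (buffer) {
            savePart(buffer, part);
        }

        // the original export can't be modified in place, so delete it once the parts exist
        file.delete({ id: fileId });
    };

    return { execute };
});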
a
Hi @Anthony OConnor, thank you for checking. I will try the suggested approach. Could you please share any reference or sample code for the dependency setup? That would be helpful to me in developing this with the new methodology.