Does anyone have experience connecting Workbench to S3? I'm having trouble figuring out where to put the credentials etc to be able to pull the files.
That explains why I was confused; I thought you might have something custom I wasn't aware of in how it worked. Thanks for the update.
I haven't attempted this before but just created an S3 bucket to test and got it working without much trouble.
I created a bucket called test-bucket18 and put the file into a folder called Domo and named the file Test_Domo_S3.csv
When creating the connection you have to use the keys you get when setting up a user (access + secret). Here are the settings in Domo.
ACCESS KEY: QEWRWGWLKGLWDKFGLKWD
SECRET KEY: ASLKDA:KSDLASKDLASKD
S3 BUCKET REGION: This one is a little odd because S3 doesn't require a specific region. us-east-2 didn't seem to work (it reported an issue with the "How would you like to choose your filename" setting), so I switched it to us-east-1 and it was fine.
WHAT FILE TYPE WOULD YOU LIKE TO IMPORT: CSV
HOW WOULD YOU LIKE TO CHOOSE YOUR FILENAME? CompleteFileName
ENTER COMPLETE FILEPATH: Domo/Test_Domo_S3.csv
FILE COMPRESSION TYPE: None
ZIP FILE ENCODING: Default, UTF-8 (though it's N/A since the file isn't compressed)
ARE HEADERS PRESENT IN CSV FILE: Yes
SELECT DELIMITING CHARACTER: Comma (,)
QUOTE CHARACTER: Default, Double Quote (")
ESCAPE CHARACTER: Default, No Escape character
ADD FILENAME COLUMN: Default, No
FILE ENCODING: Default, UTF-8
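One detail worth calling out from the settings above: the "ENTER COMPLETE FILEPATH" value is just the object key relative to the bucket, with no leading slash, so the folder and file name combine into a single S3 path. A minimal sketch to make that mapping concrete (the bucket and key names are the ones from my test; adjust for yours):

```python
# The connector's "complete filepath" is the S3 object key, relative
# to the bucket -- no leading slash, no bucket name.

BUCKET = "test-bucket18"        # bucket name from the example above
KEY = "Domo/Test_Domo_S3.csv"   # folder + filename = object key

def s3_uri(bucket: str, key: str) -> str:
    """Combine a bucket and an object key into a full S3 URI."""
    return f"s3://{bucket}/{key.lstrip('/')}"

print(s3_uri(BUCKET, KEY))  # s3://test-bucket18/Domo/Test_Domo_S3.csv
```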
Thanks - it looks like you're using the S3 connector in the platform vs. the Workbench tool, is that correct?
I've been able to use the S3 connector in the platform; I was wondering more specifically about connecting via Workbench.
Clearly it's a Friday and my brain decided to skip over words, my bad. I'll see if I can run a test with Workbench.
Can you give a brief explanation of your use case? I feel like I'm missing something with what you're trying to do and why.
I'm trying to explore the use of Workbench to pull S3 files because of certain limitations of the S3 connector when applied to my use case.
The limitation I find in the S3 connector is that when you are not pulling in a specific named file, you are limited to a partial file name, and it then pulls in whatever the most recent file with that partial name is. The way I receive these logs is a copy process from another bucket, and there's no good way for me to anticipate:
1. The number of files that will be generated (the files are segmented at 2 million rows each, so depending on traffic there could be more or fewer on a given day).
2. Which files will have been copied most recently.
Additionally, the files live in a folder named dt=whatever the date may be, and I can't dynamically tell the S3 connector in the platform which day's folder it should pick up. I was hoping to explore a way around that with a query in Workbench that lets me define constraints beyond the file name.
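To illustrate the kind of selection the platform connector can't express, here's a rough sketch of the logic I'd want applied: list everything under the day's dt= folder and keep all of the CSV segments, however many there happen to be. The key layout and names here are hypothetical (modeled on the description above); in a real script the listing would come from something like boto3's list_objects_v2 rather than a hard-coded list.

```python
from datetime import date

def keys_for_day(keys, day):
    """Return every .csv object key under the dt=<day>/ folder.

    `keys` is a listing of object keys in the bucket (e.g. from
    boto3 list_objects_v2); `day` is a datetime.date.
    """
    prefix = f"dt={day.isoformat()}/"
    return sorted(k for k in keys if k.startswith(prefix) and k.endswith(".csv"))

# Hypothetical listing: an unknown number of segments per day.
listing = [
    "dt=2024-03-01/part-00000.csv",
    "dt=2024-03-01/part-00001.csv",
    "dt=2024-03-02/part-00000.csv",
    "dt=2024-03-02/part-00001.csv",
    "dt=2024-03-02/part-00002.csv",
]

print(keys_for_day(listing, date(2024, 3, 2)))
```

The point is that the selection is driven by the date-partitioned prefix rather than by a single partial file name, so it picks up all of a day's segments regardless of how many were copied or when.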
I've been told by my AM that Workbench isn't intended for configuration with cloud services, more for local files, etc. Good to know. Thanks for the replies.