
dbeard asked:

I have been able to get and save the attached file to my hard drive. I can read it, but I cannot figure out how to pull out just the data I want. In this case it is a stock quote for VZ, and I want to pull out just the symbol and the stock price.
I would appreciate any examples of how to accomplish that.
Thank you.
String quotes are working as well for the next release. I need a few more tests before I put it online.
@dbeard After giving it more thought, and given that DJ will soon be fixing the quote problem and the capital E problem, I'm thinking you will be better off without the Scrub program at all. Just do things the way DJ last showed and let it go at that. Your only real problem was with the left paren anyway, at least in the script you posted. I see you have other scripts for other web pages as well; I haven't checked them out.
The only other thing the Scrub program can do for you is limit the amount of text gotten from the web page. That can be helpful because it lets you avoid wading through many lines of text to get to the information you want, which can speed up response time. If there were an instruction to set the file pointer to a specific position in the file, that too could, effectively, be done in the script.
Of course, it can still provide substitutions, exchanging a character or group of characters for another character or group of characters, including non-printable characters. But I think most, if not all, substitutions can now be made in the script instead. There is still the problem, though, of LF or CR characters crashing the script if they are loaded into a variable. That would happen if you loaded the entire web page at once into a single variable using the FileReadAll script instruction. Reading it line by line avoids that (a sketch follows below).
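
For what it's worth, here is a minimal EZ-Script sketch of the line-by-line approach. The file path and the "VZ" search text are hypothetical placeholders, and it assumes IndexOf() returns -1 when the search text is not found:

    # Read the saved quote file one line at a time and keep only
    # the line containing the ticker text (hypothetical file path)
    $file = "c:\temp\quote.txt"
    FileReadReset( $file )
    :readLoop
    if (FileReadEnd( $file ))
      Goto( done )
    endif
    $line = FileReadLine( $file )
    # Assumed: IndexOf() returns -1 when the text is not found
    if (IndexOf( $line, "VZ" ) != -1)
      Print( $line )
    endif
    Goto( readLoop )
    :done
    FileReadClose( $file )

Since each line is read into the variable separately, the LF/CR characters never end up inside $line, which is why this avoids the crash.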
Is that a feature request for the HTTPGet command? To limit the amount of data?
WBS0001 - Ok, thanks for all the help. DJ, can you make the data limit something the user can decide, like an added parameter to the command? Sometimes it's good to get all the data and sometimes not. Also, DJ, could you delete my app from the cloud, please?
@DJSures If it were a feature, it would be good to have two parameters associated with the HTTPGet function. Right now the Scrub program can limit the data gathered from the web page in two ways. One is by supplying an integer specifying a start position and another integer specifying how much text to get from that point. If the amount to get runs past the end of the web page text, it simply gets whatever text there is instead.
Alternatively, two strings can be specified. The Scrub program uses the first string as a search parameter to locate the start position, and the second to locate the end position. The search for the end string does not begin until after the point at which the start string was found (plus the length of the start string). If the start string is not found, the end string is ignored and the entire web page is returned instead. If the start string is found but the end string is not, the text is gathered to the end of the web page. It also pops up error messages should either of those conditions occur, and there is a way to suppress them if the user prefers.
The program allows mixing the two methods as well, for example a string to find the start position and an integer specifying how much text to get from the point at which the start string was found.
So, basically, to do the same thing, each of the two parameters would have to accept either an integer or a string (a rough sketch of the string-based mode follows below).
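
To make that concrete, here is a rough EZ-Script sketch of the string-based mode using IndexOf(), SubString(), and Length(). The URL and the "startTag"/"endTag" strings are hypothetical placeholders; it assumes IndexOf() returns -1 when the search text is not found and that SubString() takes a start position and a length:

    # Emulate the Scrub program's string-based limiting in script
    $html = HTTPGet( "http://example.com/quote" )
    $startPos = IndexOf( $html, "startTag" )
    if ($startPos = -1)
      # Start string not found: fall back to the whole page, as Scrub does
      $result = $html
    else
      # Begin the end-string search only after the start string itself
      $startPos = $startPos + Length( "startTag" )
      $rest = SubString( $html, $startPos, Length( $html ) - $startPos )
      $endPos = IndexOf( $rest, "endTag" )
      if ($endPos = -1)
        # End string not found: gather to the end of the page
        $result = $rest
      else
        $result = SubString( $rest, 0, $endPos )
      endif
    endif
    Print( $result )

The integer-based mode is just the last SubString() call with two literal numbers.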
@dbeard You can delete the project yourself by going to it in the cloud. A delete option should appear for you since you posted it.
Thanks! I don't think the parameters are necessary after reading your description. The EZ-Script SubString function will do the same thing.
*Note: that won't work for you until the next ARC is updated, because of the quote issue in the current ARC. But it works on mine.
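
For example, something along these lines, where the URL and the numbers are hypothetical placeholders (pull 6 characters starting at a known offset of 120 in the page text):

    # Grab the page and slice out just the piece you want
    $html = HTTPGet( "http://example.com/quote" )
    $price = SubString( $html, 120, 6 )
    Print( $price )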
@DJSures Since we are on the topic of fixing errors in the script, I would like to take this opportunity to reiterate that the script will also crash if the strings from a web page contain Line Feed or Carriage Return characters and the entire web page is loaded at once using FileReadAll. Correcting this would help folks who want to work with the entire web page text at once in a single variable, as opposed to reading the file line by line.
EDIT: Removed the request for optional parameters in FileReadAll since the SubString function will do that as well.
Thanks, once again, for all your efforts.