
dbeard asked:

I have been able to get and save the attached file to my hard drive. I can read it, but I cannot figure out how to pull out just the data I want. In this case it is a stock quote for VZ, and I want to extract just the symbol and the stock price.
I would appreciate any examples of how to accomplish this.
Thank you.
If you look at the output file, you will see that search terms like name: and Absolute_magnitude_h: are now (without filtering) name : and absolute_magnitude_h :, with a space before the colon that the filter program must previously have taken out. So you would need to change those search words, as well as the others, to include the space for them to be found. That is why they are not being found currently.
EDIT: Ah, I see, dbeard, you found the problem while I was checking things out and writing this reply.
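For illustration, here is a minimal Python sketch of that point; the file name and helper function are hypothetical, and the real project uses the ARC script functions rather than Python:
Code:
# Hypothetical sketch: the page now writes keys as "absolute_magnitude_h :"
# (space before the colon), so the old search term with no space never matches.

def find_value(text, key):
    # Return whatever follows 'key' on its line, or None if the key is absent.
    for line in text.splitlines():
        if key in line:
            return line.split(key, 1)[1].strip()
    return None

page = open("output.txt").read()                    # the saved web-page text
print(find_value(page, "absolute_magnitude_h:"))    # old term, no space: None
print(find_value(page, "absolute_magnitude_h :"))   # with the space: found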
I have got the script working after fixing a lot of the parsing.
In what way is the Scrub program not working?
The Scrub program does some pre-emptive character replacement in addition to the left-paren removal. It also deletes double quotes and changes E to e. It removes quotes because strings in the scripts are delimited by them, so any quote in a string read from a web page will cause a crash or a truncated string. That isn't a bug, just a fact of life. Lowercasing E prevents the problem that occurs when an uppercase E is directly followed by a number: the script interpreter tries to treat it as an E-notation value and crashes in doing so. Backslashes can also sometimes cause problems, so they are removed as well, along with semicolons.
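As a rough Python sketch of those replacements (the rule list is taken from the description above; the function name and example are mine, not from the actual Scrub program):
Code:
def scrub(text):
    # Pre-emptive replacements described above: drop double quotes,
    # backslashes, semicolons, and left parens, then lowercase E so an
    # uppercase E followed directly by a number is not read as E-notation.
    for ch in ['"', '\\', ';', '(']:
        text = text.replace(ch, '')
    return text.replace('E', 'e')

print(scrub('Price: "54.2E3"; (approx)'))   # -> Price: 54.2e3 approx)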
As your project currently stands it seems to be free from any of those problems, given the web page you are using.
I have a new version of the Scrub program that is more versatile than the one you are currently using, should you be interested. It is used in much the same way as the one you have now, so switching to it would not be too difficult. If you post your project to the cloud, I could download it, modify it to use the new version, and upload it back.
After giving it more thought, and given that DJ will soon be fixing the quote problem and the capital-E problem, I'm thinking you will be better off without the Scrub program at all. Just do things the way DJ last showed and leave it at that. Your only real problem was with the left paren anyway, at least in the script you posted. I see you have other scripts for other web pages as well; I haven't checked those out.
The only other thing the Scrub program can do for you is limit the amount of text taken from the web page. That can be helpful in that you avoid having to wade through many lines of text to get to the information you want, which can speed up response time. If there were an instruction to set the file pointer to a specific position in the file, that too could effectively be done in the script.
Of course, it can still provide substitutions, exchanging a character or group of characters for another character or group of characters, including non-printable characters. But I think most, if not all, substitutions can now be made in the script instead. There is still the problem of LF or CR characters crashing the script if they are loaded into a variable, though. That would happen if you decided to load the entire web page at once into a single variable using the FileReadAll script instruction. Reading it line by line avoids that, however.
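In Python terms the difference looks roughly like this; FileReadAll and the crash itself belong to the ARC script environment, so this is only an analogy:
Code:
# Analogy only: loading the whole page at once (as FileReadAll does) puts
# CR/LF characters into a single variable, which is what crashes the script;
# reading line by line never stores those characters in the first place.

with open("page.txt") as f:
    whole = f.read()                 # whole-page read: CR/LF end up inside 'whole'

with open("page.txt") as f:
    for line in f:
        line = line.rstrip("\r\n")   # per-line read: terminators stripped each time
        # ... search each line for the text you want ...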
If it were a feature, it would be good if there were two parameters associated with the HTTPGet function. Right now the Scrub program can limit the data gathered from the web page in two ways. One is by supplying an integer specifying a start position and a second integer specifying how much text to get from that point. If the amount to get extends beyond the end of the web page text, it simply gets whatever text there is.
Alternatively, two strings can be specified. The Scrub program uses the first string as a search term to locate the start position and the second to locate the end position. The search for the end string does not begin until the point at which the start string was found, plus the length of the start string. If the start string is not found, the end string is ignored and the entire web page is returned instead. If the start string is found but the end string is not, the text is gathered through to the end of the web page. The program also pops up error messages should either of those conditions occur, and there is a way to suppress those messages should the user not want them to appear.
The program allows the two methods to be mixed as well, for example a string to find the start position and an integer specifying how much text to get from the point at which the start string was found.
So, basically, to do the same thing, the two parameters would have to be able to accept either an integer or a string.
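As a sketch of that mixed-parameter behavior in Python (the function name and details are mine, the error popups described above are omitted, and the real Scrub program's interface may differ):
Code:
def limit(text, start, end):
    # 'start' and 'end' may each be an int (a position / a length)
    # or a string (a marker to search for), as described above.
    if isinstance(start, str):
        i = text.find(start)
        if i < 0:
            return text              # start marker missing: return the whole page
        i += len(start)              # begin just past the start marker
    else:
        i = start
    if isinstance(end, str):
        j = text.find(end, i)        # end search begins only after the start
        if j < 0:
            j = len(text)            # end marker missing: take the rest
    else:
        j = min(i + end, len(text))  # 'end' is a length; clamp at the page end
    return text[i:j]

page = "...symbol : VZ price : 41.02..."
print(limit(page, "symbol :", "price :"))   # two strings -> ' VZ '
print(limit(page, "symbol :", 3))           # mixed: string start, length 3 -> ' VZ'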
You can delete the project yourself by going to it in the cloud. A delete option should appear for you since you posted it.
Code:
*Note: that won't work for you until the next ARC release, because of the quote issue in the current ARC. But it works on mine.
Since we are on the topic of fixing errors in the script, I would like to take this opportunity to reiterate that the script will also crash if strings from a web page contain Line Feed or Carriage Return characters and the entire page is loaded at once using FileReadAll. Correcting this would be helpful to folks who would like to work with the entire text of a web page at once, in a single variable, as opposed to reading the file line by line.
EDIT: Removed the request for optional parameters in FileReadAll, since the SubString function will do that as well.
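If the whole page is read at once anyway, a Python analogy of that workaround (SubString here stands in for whatever the script's substring facility offers; the search term is only an example) would be:
Code:
# Sketch: neutralize the CR/LF characters before storing the page text,
# then cut out just the piece needed, SubString-style.

raw = open("page.txt").read()
safe = raw.replace("\r", " ").replace("\n", " ")   # make CR/LF harmless

start = safe.find("symbol :")
if start >= 0:
    print(safe[start:start + 60])   # 60 characters starting at the search term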
Thanks, once again, for all your efforts.