Merge commit 'bfd134096e'
Resolve resource leak in Protocols.HTTP.Promise.
Most visible when using https:// URLs to servers that use Keep-Alive,
this resource leak results in an accumulation of defunct file descriptors
for sockets that should have been closed but weren't. The solution is twofold:
1) After a Request has sent its response, it disposes of itself, so the
   connection is returned to the pool.
2) In the mini-Session used by Protocols.HTTP.Promise.get_url and friends,
   connection reuse is disabled.
There appears to be a reference loop somewhere in the Session that prevents
its disposal. Still investigating.
Merge remote-tracking branch 'origin/master' into new_utf8
Merge remote-tracking branch 'origin/8.1' into gobject-introspection
HTTP.Promise: Properly propagate extra_callback_arguments.
Merge remote-tracking branch 'origin/8.1' into peter/travis
Promise: Replace Promise with Promise2, slightly updated interface.
Most notable differences between HTTP.Promise2 and HTTP.Promise:
- Less filling (20% smaller compiled object file).
- Instead of two result objects, we simply have a single
HTTP.Promise.Result object which is passed both on_success()
  and on_failure(). Why did the original separate these into
  Promise.Success and Promise.Failure types?
- Various code optimisations that do not change the interface.
- The Result object lacks the ok() method. What use was/is it?
  You should normally already know whether you are a success or a failure.
- The Result object returns the raw body through "data", and the decoded
body through get() (to conform more to standard Future objects).
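A minimal sketch of the single-Result interface described above; the exact callback signatures, and any member beyond data/get()/on_success/on_failure named in this log, are assumptions:

```pike
// Sketch only: one Result type serves both callbacks.
Protocols.HTTP.Promise.get_url("https://example.com/")
  ->on_success(lambda (Protocols.HTTP.Promise.Result ok) {
      // "data" is the raw body, get() the decoded body.
      write("Fetched %d raw bytes.\n", sizeof(ok->data));
      write("%s\n", ok->get());
    })
  ->on_failure(lambda (Protocols.HTTP.Promise.Result fail) {
      werror("Request failed.\n");
    });
```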
Merge branch 'grubba/rename_lfun_destroy' into 8.1
Modules: Fixed lots of warnings.
Testsuite: Updated for LFUN::_destruct().
Compiler: Don't complain about LFUN::destroy() in compat mode.
Fix multiple warnings.
Runtime: LFUN::destroy() has been renamed to _destruct().
Compiler: Rename LFUN::destroy() to LFUN::_destruct().
Modules: Fixed lots of warnings.
More fallout from the LFUN::destroy ==> LFUN::_destruct rename.
HTTP.Promise: async_data() dropped 'slow' data.
Added the content_type getter to Protocols.HTTP.Promise.Success and added automatic decoding of gzipped data. This means the explicit decoding in Web.SOAP isn't necessary anymore.
Renamed `content_encoding() to `charset() since that is what's returned. Also fixed a bug where `content_type() wouldn't return anything if no charset is defined in the content-type header.
Fix refdoc typo.
Query: timed_async_fetch() didn't support chunked transfer encoding.
Promise: Some pikedoc fixes.
Protocols.HTTP.Promise: Added the member "extra_args" to the Arguments class.
If set, it is passed on to the Result object given as argument to the on_success/on_failure callbacks.
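A hypothetical usage sketch for extra_args; the spelling of the Arguments constructor mapping and how the extra arguments are exposed on the Result are assumptions, not confirmed by this log:

```pike
// Sketch: tag a request and read the tag back in the callback.
Protocols.HTTP.Promise.Arguments args =
  Protocols.HTTP.Promise.Arguments(([ "extra_args" : ({ "request-42" }) ]));
Protocols.HTTP.Promise.get_url("https://example.com/", args)
  ->on_success(lambda (Protocols.HTTP.Promise.Result ok) {
      // The extra arguments travel with the Result to the callback.
      werror("Tag: %O\n", ok->extra_args);
    });
```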
Documentation [Protocols.HTTP]: Fixed typo.
No need to add a dependency on Concurrent.Future, as it in turn doesn't have any uncertain dependencies.
Just some Pike doc fixes.
Protocols.HTTP.Promise: The arguments are now passed as an object of the class Arguments, for better type checking and more coherent method signatures for the request methods.
Also some refactoring.
Web.Api.Api: Using Protocols.HTTP.Promise for the requests if available and applicable.
Accidentally forgot to remove some debug defines in the previous commit.
Protocols.HTTP.Query: Fixed an old bug (https://bugzilla.roxen.com/bugzilla/show_bug.cgi?id=7676) where the timeout in timed_async_fetch wasn't reset on each new data read.
This timeout had its own property (data_timeout), which is now "deprecated" since the property "timeout" serves pretty much the same purpose, but for the connection. So timeout is now the default value, unless data_timeout is explicitly set, in which case that value is used instead.
Since there was no proper way to set a max time for the entire operation (the old behaviour of data_timeout in timed_async_fetch was an accident, not intentional), the new property "maxtime" has been added. If this is set (default is 0 = indefinitely), the request is aborted after maxtime seconds, even if data is still being read.
So in short:
data_timeout = 0 // unless explicitly set
timeout = 120 // connection timeout, and then data read timeout
maxtime = 0 // 0 = off, otherwise the entire operation must be done within maxtime seconds or else the request is aborted
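The summary above can be sketched as property assignments on a Query object before a timed fetch; the property names are from this log, everything else (construction, values) is an assumption:

```pike
Protocols.HTTP.Query q = Protocols.HTTP.Query();
q->timeout = 120;  // connection timeout, also reused per data read
q->maxtime = 300;  // hard cap: abort after 300 s even if data still flows
// q->data_timeout left at 0: each read falls back to timeout.
```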
Protocols.HTTP.Session: Added some documentation
Protocols.HTTP.Promise: New module which utilises the new Concurrent.Promise/Future support for HTTP requests. Internally uses Protocols.HTTP.Session for the actual HTTP work.
Web.Api.Api: Now fetches data asynchronously when async calls are made.
Concurrent: on_success and on_fail now return the object they were called on, so that calls can be chained.
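The chaining this enables can be sketched as follows, using the on_failure spelling seen elsewhere in this log; the exact Promise/Future plumbing is an assumption:

```pike
// Sketch: registering both callbacks in one chained expression.
Concurrent.Promise p = Concurrent.Promise();
p->future()
  ->on_success(lambda (mixed v) { write("ok: %O\n", v); })
  ->on_failure(lambda (mixed e) { werror("fail: %O\n", e); });
p->success("done");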
Parser.Markdown: Fixed the #require macro directive.