You don't want to use file_get_contents for web crawling;
use cURL: http://php.net/manual/en/book.curl.php
(PHP 4 >= 4.3.0, PHP 5, PHP 7)
file_get_contents — Reads an entire file into a string
file_get_contents ( string $filename [, bool $use_include_path = false [, resource $context [, int $offset = -1 [, int $maxlen ]]]] ) : string
This function is similar to file(), except that file_get_contents() returns the file in a string, starting at the specified offset and reading up to maxlen bytes. On failure, file_get_contents() will return FALSE.
file_get_contents() is the preferred way to read the contents of a file into a string. It will use memory-mapping techniques, if supported by your OS, to enhance performance.
Note:
If you're opening a URI with special characters, such as spaces, you need to encode the URI with urlencode().
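For instance (the host and filename below are placeholders), you can encode just the unsafe segment. Note that urlencode() encodes a space as +, which suits query strings, while rawurlencode() produces %20, which suits path segments:

```php
<?php
// Hypothetical URL whose last path segment contains a space.
$name = 'my file.txt';
// rawurlencode() percent-encodes the space as %20 (path-safe);
// urlencode() would produce a + instead (query-string style).
$url = 'http://www.example.com/docs/' . rawurlencode($name);
echo $url; // http://www.example.com/docs/my%20file.txt
?>
```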
filename
Name of the file to read.
use_include_path
Note:
As of PHP 5, the FILE_USE_INCLUDE_PATH constant can be used to trigger an include path search.
context
A valid context resource created with stream_context_create(). If you don't need a custom context, you can skip this parameter by passing NULL.
offset
The offset where the reading starts on the original stream. Seeking (offset) is not supported with remote files. Attempting to seek on non-local files may work with small offsets, but this is unpredictable because it works on the buffered stream.
maxlen
Maximum length of data read. The default is to read until end of file is reached. Note that this parameter is applied to the stream processed by the filters.
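A sketch of that last point, assuming a writable /tmp: when a file is read through the php://filter wrapper, maxlen limits the bytes counted after the string.toupper filter has run:

```php
<?php
// maxlen counts bytes on the stream *after* filters are applied.
file_put_contents('/tmp/data.txt', 'hello world');
$first = file_get_contents(
    'php://filter/read=string.toupper/resource=/tmp/data.txt',
    false, null, 0, 5
);
var_dump($first); // string(5) "HELLO"
?>
```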
The function returns the read data, or FALSE on failure.
An E_WARNING level error is generated if maxlen is less than zero, or if seeking to the specified offset in the stream fails.
Example #1 Get and output the source of the homepage of a website
<?php
$homepage = file_get_contents('http://www.example.com/');
echo $homepage;
?>
Example #2 Searching within the include_path
<?php
// <= PHP 5
$file = file_get_contents('./people.txt', true);
// > PHP 5
$file = file_get_contents('./people.txt', FILE_USE_INCLUDE_PATH);
?>
Example #3 Reading a section of a file
<?php
// Read 14 characters starting from the 21st character
$section = file_get_contents('./people.txt', NULL, NULL, 20, 14);
var_dump($section);
?>
The above example will output something similar to:
string(14) "lle Bjori Ro"
Example #4 Using stream contexts
<?php
// Create a stream
$opts = array(
    'http' => array(
        'method' => "GET",
        'header' => "Accept-language: en\r\n" .
                    "Cookie: foo=bar\r\n"
    )
);
$context = stream_context_create($opts);

// Open the file using the HTTP headers set above
$file = file_get_contents('http://www.example.com/', false, $context);
?>
Version | Description |
---|---|
5.1.0 | Added the offset and maxlen parameters. |
5.0.0 | Added context support. |
Note: This function is binary-safe.
A URL can be used as a filename with this function if the fopen wrappers have been enabled. See fopen() for more details on how to specify the filename. See Supported Protocols and Wrappers for information about the capabilities of the various wrappers, notes on their usage, and information on any predefined variables they may provide.
When using SSL, Microsoft IIS will violate the protocol by closing the connection without sending a close_notify indicator. PHP will report this as "SSL: Fatal Protocol Error" when you reach the end of the data. To work around this, the value of error_reporting should be lowered to a level that does not include warnings. PHP 4.3.7 and higher can detect buggy IIS server software when you open the stream using the https:// wrapper and will suppress the warning. When using fsockopen() to create an ssl:// socket, the developer is responsible for detecting and suppressing this warning.
A simple way to parse the side-effect array $http_response_header into a better assoc format.
<?php
/**
 * http_parse_response_header()
 * Parse $http_response_header produced by file_get_contents().
 *
 * @param array $header
 *   Supposed $http_response_header or an array alike.
 * @return array
 *   Assoc array of the parsed version.
 */
function http_parse_response_header($header) {
    if (empty($header)) return []; // return empty array

    // parse status line
    $status_line = array_shift($header);
    if (!preg_match('/^(\w+)\/(\d+\.\d+) (\d+) (.+?)$/', $status_line, $matches))
        throw new Exception("malformed status line: {$status_line}");

    return [
        'PROTOCOL' => $matches[1],
        'PROTOCOL_VERSION' => $matches[2],
        'STATUS_CODE' => $matches[3],
        'STATUS' => $matches[4],
    ] + array_reduce($header, function ($carry, $line) {
        // parse header line
        list($key, $value) = explode(':', $line, 2);
        if (!isset($carry[$key])) {
            $carry[$key] = trim($value);
        } else {
            $carry[$key] .= "\n" . trim($value);
        }
        return $carry;
    }, []);
}
?>
Also added to Github Gist:
https://gist.github.com/yookoala/017f056c34a169514fbf0cd7cc5b2e78
file_get_contents does not normally respect PHP's flock locking, i.e. advisory locking.
You can workaround this with some extra code to request a shared lock, like...
<?php
$tmp = fopen($path, 'rb');
@flock($tmp, LOCK_SH);
$contents = file_get_contents($path);
@flock($tmp, LOCK_UN);
fclose($tmp);
?>
Keep in mind that if you use a URL as the filename argument and the external resource is not reachable, the function will not simply return FALSE; an exception will be thrown instead.
So, in this case, instead of doing this:
$content = file_get_contents('https://en.wikipedia.org/wiki/Cat#/media/File:Large_Siamese_cat_tosses_a_mouse.jpg');
if ($content === false) {
    // Handle the error
}
Do this:
try {
    $content = file_get_contents('https://en.wikipedia.org/wiki/Cat#/media/File:Large_Siamese_cat_tosses_a_mouse.jpg');
    if ($content === false) {
        // Handle the error
    }
} catch (Exception $e) {
    // Handle exception
}
The funniest thing here is that seeking on non-local files may or may not work. This is unpredictable, and thus the function should throw rather than doing some magical stuff.
Also, trying to read a non-local file that doesn't exist results in FALSE being returned and not a single warning emitted.
It is important to write the method in capital letters, like "GET" or "POST", and not "get" or "post". Some servers may respond with a 400 error if you do not use caps in the method.
If you are using the file_get_contents() function to retrieve an HTTP URL and print the HTTP content, you can also forward the original Content-Type header using $http_response_header and the header() function:
<?php
foreach ($http_response_header as $value) {
    if (preg_match('/^Content-Type:/i', $value)) {
        // Successful match
        header($value, false);
    }
}
?>
Reading all script input is a simple task with file_get_contents, but the correct stream depends on which SAPI is being used.
Only in Apache, not in CLI:
<?php
$input = file_get_contents("php://input");
?>
Only in CLI, not in Apache:
<?php
$input = file_get_contents("php://stdin");
?>
In Apache php://stdin will be empty; in CLI php://input will be empty instead, with no error indication.
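One way to sidestep the difference is to pick the stream from the current SAPI at runtime; a minimal sketch:

```php
<?php
// php://input is populated under a web SAPI (e.g. Apache),
// php://stdin under the CLI; choose accordingly.
$stream = (php_sapi_name() === 'cli') ? 'php://stdin' : 'php://input';
$input = file_get_contents($stream);
?>
```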
Negative offsets don't work as you might expect (unlike in substr(), for example).
So
<?php echo file_get_contents(__FILE__, false, null, -10) ?>
does the same as
<?php echo file_get_contents(__FILE__, false, null, 0) ?>
To get the last 10 characters of a file, you need to use
<?php echo file_get_contents(__FILE__, false, null, (filesize(__FILE__) - 10)) ?>
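That last line can be wrapped in a small helper; a sketch (file_get_tail is a made-up name) that also clamps the offset so files shorter than $n bytes don't trigger a failed seek:

```php
<?php
// Hypothetical helper: return the last $n bytes of a file.
function file_get_tail($path, $n) {
    $size = filesize($path);
    // Clamp so a short file starts at 0 instead of seeking before BOF.
    $offset = max(0, $size - $n);
    return file_get_contents($path, false, null, $offset);
}

file_put_contents('/tmp/tail-demo.txt', 'abcdefghij');
var_dump(file_get_tail('/tmp/tail-demo.txt', 4)); // string(4) "ghij"
?>
```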
This is an easy way to trigger scripts by listening for POSTs. I simply point a service's webhook URL to a script that reads php://input with file_get_contents(), casts the result to an array, and then uses simplexml_load_string() to parse it, taking one of the keys' data as the parameter for my script.
For those who use file_get_contents for JSON or other RESTful services, as my architecture did for a big site, this will probably help a lot.
We struggled with the site using GET URLs that would go through our load balancer instead of hitting the local server.
What we did was load this function through a local URL and set the Host: header for our virtualhost entries on the site we wanted to load.
This code solved our issue:
<?php
//set the header context stream for virtualhost lookup
$context = stream_context_create(array('http' => array('header' => 'Host: www.VIRTUALHOSTDOMAIN.com')));
//use a localhost url or alternatively 127.0.0.1 ip
$url = 'http://localhost/rest-service/?get-user&id=######';
//fetch the data through webserver using the Host http header we set
$data = json_decode(file_get_contents($url, 0, $context));
//verify you have your data
var_dump($data);
?>
file_get_contents can do a POST; create a context for that first:
$opts = array('http' =>
    array(
        'method'  => 'POST',
        'header'  => "Content-Type: text/xml\r\n".
                     "Authorization: Basic ".base64_encode("$https_user:$https_password")."\r\n",
        'content' => $body,
        'timeout' => 60
    )
);
$context = stream_context_create($opts);
$url = 'https://'.$https_server;
$result = file_get_contents($url, false, $context, -1, 40000);
For those having this problem when trying file_get_contents(url):
Warning: file_get_contents(url): failed to open stream: HTTP request failed! in xx on line yy
If you are behind a SonicWall firewall, read this:
https://bugs.php.net/bug.php?id=40197
(this little line: uncheck a box in the internal settings of the firewall labeled "Enforce Host Tag Search with for CFS")
Apparently, by default SonicWall blocks any HTTP request without a "Host:" header, which is the case in PHP's file_get_contents(url) implementation.
This is why, if you try to get the same URL from the same machine with cURL or wget, it works.
I hope this will be useful to someone, it took me hours to find out :)
I experienced a problem using hostnames instead of a straight IP with some server destinations.
If I use file_get_contents("www.jbossServer.example/app1",...)
I will get an 'Invalid hostname' from the server I'm calling.
This is because file_get_contents probably rewrites your request after resolving the IP, so you obtain the same thing as:
file_get_contents("xxx.yyy.www.zzz/app1",...)
And you know that many servers will deny you access if you go through IP addressing in the request.
With cURL this problem doesn't exist. It resolves the hostname while leaving the request as you set it, so the server is not rude in response.
If you want to insert tracking scripts into your shopping system, some scripts don't support intelligent detection of HTTPS, so I made a script I put on the server that rewrites 'http' to 'https' in the script, assuming everything has to be UTF-8 encoded (as a fallback it makes a redirect).
It is important that the HTTPS source DOES exist!
<?php
function file_get_contents_utf8($fn) {
    $opts = array(
        'http' => array(
            'method' => "GET",
            'header' => "Content-Type: text/html; CHARSET=gb2312"
        )
    );
    $context = stream_context_create($opts);
    $result = @file_get_contents($fn, false, $context);
    return $result;
}

header("Content-Type: text/html; CHARSET=gb2312");

$tPath = "URL YOU WANT TO MODIFY";
$result = file_get_contents_utf8("http://".$tPath);
if ($result == false) {
    header("Location: https://".$tPath); // fallback
    exit();
} else {
    echo mb_ereg_replace("http", "https", $result);
}
?>
At least as of PHP 5.3, file_get_contents no longer uses memory mapping.
See comments on this bug report:
http://bugs.php.net/bug.php?id=52802
Here is another (maybe the easiest) way of doing POST HTTP requests from PHP using its built-in capabilities. Feel free to add the headers you need (notably the Host: header) to further customize the request.
Note: this method does not allow file uploads. If you want to upload a file with your request, you will need to modify the context parameters to provide multipart/form-data encoding (check out http://www.php.net/manual/en/context.http.php) and build the $data_url following the guidelines on http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.2
<?php
/**
 * Make an HTTP POST request and return the response content and headers.
 *
 * @param string $url  URL of the requested script
 * @param array $data  hash array of request variables
 * @return array a hash array with response content and headers in the following form:
 *   array('content' => '<html></html>',
 *         'headers' => array('HTTP/1.1 200 OK', 'Connection: close', ...))
 */
function http_post($url, $data)
{
    $data_url = http_build_query($data);
    $data_len = strlen($data_url);

    return array(
        'content' => file_get_contents($url, false, stream_context_create(array(
            'http' => array(
                'method'  => 'POST',
                'header'  => "Connection: close\r\nContent-Length: $data_len\r\n",
                'content' => $data_url
            )
        ))),
        'headers' => $http_response_header
    );
}
?>
Read text per line and convert it to an array.
For example, the input file is input.txt, and it contains the text below:
one
two
three
four
five
++++++++++++++++++++++++++
Read a value per line:
<?php
$data = file_get_contents("input.txt"); // read the file
$convert = explode("\n", $data); // create an array, separated by newline
for ($i = 0; $i < count($convert); $i++) {
    echo $convert[$i] . ', '; // write value by index
}
?>
++++++++++++++++++++++++++
Output :
one, two, three, four, five,
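The same result can be had in one step with file(), which splits a file into an array of lines for you; FILE_IGNORE_NEW_LINES strips each trailing newline and FILE_SKIP_EMPTY_LINES drops blank lines (the /tmp path below is just for the demo):

```php
<?php
file_put_contents('/tmp/input.txt', "one\ntwo\nthree\nfour\nfive\n");

// file() reads and splits in one call; no explode() needed.
$convert = file('/tmp/input.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
echo implode(', ', $convert); // one, two, three, four, five
?>
```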
When using a URI with a login / password (HTTP or FTP, for an example), you may need to urlencode the password if it contains special characters.
Do not urlencode the whole URI, just the password.
Don't do :
urlencode('ftp://login:mdp%?special@host/dir/file')
Do :
'ftp://login:' . urlencode('mdp%?special') . '@host/dir/file';
Might seem obvious, but is worth noting.
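A sketch of the same idea with concrete code (host, path, and credentials are placeholders); rawurlencode() is used so the password is percent-encoded rather than plus-encoded:

```php
<?php
// Encode only the password component of the URI.
$password = 'mdp%?special';
$uri = 'ftp://login:' . rawurlencode($password) . '@host/dir/file';
echo $uri; // ftp://login:mdp%25%3Fspecial@host/dir/file
?>
```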
If the file you are working with is bigger than 64 KB and you are getting a deadlock, your pipe buffer has overflowed. Here are two ways to avoid that.
1) Use temporary files for the descriptors:
<?php
$descriptorspec = array(
    0 => array("file", "/tmp/ens/a.ens", "r"), // stdin is a file the child will read from
    1 => array("file", "/tmp/ens/a.html", "w"), // stdout is a file the child will write to
    2 => array("file", "/tmp/ens/error-output.txt", "a") // stderr is a file to write to
);
?>
2) Inline read using stream_set_blocking(). PHP doesn't properly handle the last part of the file otherwise.
<?php
$READ_LEN = 64*1024;
$MAX_BUF_LEN = 2*$READ_LEN;

$url = "http://some.domain.com:5984/".$db."/".$member."/contents";
$src = fopen($url, "r");

$cwd = '/tmp';
$cmd['enscript'] = "/usr/bin/enscript";
$cmd['enscript-options'] = " -q --language=html --color -Ejcl -o -";
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin is a pipe that the child will read from
    1 => array("pipe", "w")  // stdout is a pipe that the child will write to
);
$ph = proc_open($cmd['enscript']." ".$cmd['enscript-options'], $descriptorspec, $pipes, $cwd);

stream_set_blocking($src, 0);
stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);

$CMD_OUT_OPEN = TRUE;
$input = ''; // write buffer; must start initialized
$k = 0;
while (!feof($pipes[1]) || !feof($src) || $k > 0) {
    if (!feof($src) && $k + $READ_LEN <= $MAX_BUF_LEN) {
        $input .= fread($src, $READ_LEN);
        $k = strlen($input);
    }
    if ($k > 0) {
        $l = fwrite($pipes[0], $input);
        $k -= $l;
        $input = substr($input, $l);
    }
    if ($CMD_OUT_OPEN && $k == 0 && feof($src)) {
        fclose($pipes[0]);
        $CMD_OUT_OPEN = FALSE;
    }
    $output = fread($pipes[1], $READ_LEN);
    $outputn = str_replace("<H1>(stdin)</H1>", "", $output);
    echo $outputn;
}
fclose($pipes[1]);
$return_value = proc_close($ph);
?>
The offset is 0 based. Setting it to 1 will skip the first character of the stream.
If your file_get_contents freezes during several seconds, here is maybe your answer:
Beware that the default keepalive timeout of Apache 2.0 httpd is 15 seconds. This is true for HTTP/1.1 connections, which is not the default behavior of file_get_contents but you can force it, especially if you are trying to act as a web browser. I don't know if this is also the case for HTTP/1.0 connections.
Forcing the server to close the connection would make you gain those 15 seconds in your script:
<?php
$context = stream_context_create(array('http' => array('header' => 'Connection: close')));
$content = file_get_contents("http://www.example.com/test.html", false, $context);
?>
Another way of resolving slowness issues is to use cURL or fsockopen. Bear in mind that contrary to the behavior of web browsers, file_get_contents doesn't return the result when the web page is fully downloaded (i.e. HTTP payload length = value of the response HTTP "Content-Length" header) but when the TCP connection is closed.
I hope this behavior will change in future releases of PHP.
This has been experienced with PHP 5.3.3.
If you want to check whether the function returned an error in the case of an HTTP request, it's not sufficient to test the result for truthiness, because the HTTP response body may legitimately be empty. Use a strict comparison against FALSE instead.
<?php
$result = file_get_contents("http://www.example.com");
if ($result === false) {
    // treat error
} else {
    // handle good case
}
?>
[EDIT BY thiago: Has enhancements from an anonymous user]
Sometimes you might get an error opening an HTTP URL even though you have set "allow_url_fopen = On" in php.ini.
For me, the solution was to also set "user_agent" to something.
On CentOS 5, and maybe other Red Hat based systems, any attempt to use file_get_contents to access a URL on an HTTP port other than 80 (e.g. "http://www.example.com:8040/page") may fail with a permissions violation (error 13) unless the box you are running PHP on has SELinux set to 'permissive' rather than 'enforcing'. Otherwise the request doesn't even get out of the box, i.e. the permissions violation is generated locally by SELinux.
In my dev environment with a relatively low-speed drive (standard SATA 7200RPM), reading a 25MB zip file 10 times...
<?php
$data = `cat /tmp/test.zip`;
// 1.05 seconds
$fh = fopen('/tmp/test.zip', 'r');
$data = fread($fh, filesize('/tmp/test.zip'));
fclose($fh);
// 1.31 seconds
$data = file_get_contents('/tmp/test.zip');
// 1.33 seconds
?>
However, on a 21k text file running 100 iterations...
<?php
$data = `cat /tmp/test.txt`;
// 1.98 seconds
$fh = fopen('/tmp/test.txt', 'r');
$data = fread($fh, filesize('/tmp/test.txt'));
fclose($fh);
// 0.00082 seconds
$data = file_get_contents('/tmp/test.txt');
// 0.0069 seconds
?>
Despite the comment about file_get_contents being faster due to memory mapping, file_get_contents is the slowest in both of the above examples. If you need the best performance out of your production box, you might want to throw together a script to check which method is fastest for which file sizes on that particular machine, then optimize your code to check the file size and use the appropriate function for it.
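A minimal sketch of such a check (time_reads is a made-up helper; adjust $path and $iterations for your machine):

```php
<?php
// Time two read strategies on the same file and return seconds spent.
function time_reads($path, $iterations = 100) {
    $results = array();

    $t = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        $fh = fopen($path, 'rb');
        $data = fread($fh, filesize($path));
        fclose($fh);
    }
    $results['fread'] = microtime(true) - $t;

    $t = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        $data = file_get_contents($path);
    }
    $results['file_get_contents'] = microtime(true) - $t;

    return $results;
}
?>
```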
A UTF-8 issue I've encountered is reading a URL with a non-UTF-8 encoding that is later displayed improperly because file_get_contents() treated it as UTF-8. This small function should show you how to address the issue:
<?php
function file_get_contents_utf8($fn) {
$content = file_get_contents($fn);
return mb_convert_encoding($content, 'UTF-8',
mb_detect_encoding($content, 'UTF-8, ISO-8859-1', true));
}
?>
Setting the timeout properly without messing with ini values:
<?php
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1
    )
));
file_get_contents("http://example.com/", false, $ctx);
?>
This is a nice and simple substitute for file_get_contents() using cURL; it returns FALSE if $contents is empty.
<?php
function curl_get_file_contents($URL)
{
    $c = curl_init();
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_URL, $URL);
    $contents = curl_exec($c);
    curl_close($c);

    if ($contents) return $contents;
    else return FALSE;
}
?>
Hope this help, if there is something wrong or something you don't understand let me know :)
I decided to make a similar function to this, called file_post_contents; it uses POST instead of GET to call, kinda handy...
<?php
function file_post_contents($url, $headers = false) {
    $url = parse_url($url);
    if (!isset($url['port'])) {
        if ($url['scheme'] == 'http') { $url['port'] = 80; }
        elseif ($url['scheme'] == 'https') { $url['port'] = 443; }
    }
    $url['query'] = isset($url['query']) ? $url['query'] : '';
    $url['protocol'] = $url['scheme'].'://';
    $eol = "\r\n";

    // Build the raw request in its own variable so the $headers
    // flag parameter is left untouched.
    $request = "POST ".$url['protocol'].$url['host'].$url['path']." HTTP/1.0".$eol.
               "Host: ".$url['host'].$eol.
               "Referer: ".$url['protocol'].$url['host'].$url['path'].$eol.
               "Content-Type: application/x-www-form-urlencoded".$eol.
               "Content-Length: ".strlen($url['query']).$eol.
               $eol.$url['query'];

    $fp = fsockopen($url['host'], $url['port'], $errno, $errstr, 30);
    if ($fp) {
        fputs($fp, $request);
        $result = '';
        while (!feof($fp)) { $result .= fgets($fp, 128); }
        fclose($fp);
        if (!$headers) {
            // remove the response headers
            $pattern = "/^.*\r\n\r\n/s";
            $result = preg_replace($pattern, '', $result);
        }
        return $result;
    }
}
?>
It seems the function looks for the file inside the current working (executing) directory before looking in the include path, even with the FILE_USE_INCLUDE_PATH flag specified.
Same behavior as include, actually.
By the way, I feel the doc is not entirely clear on the exact order of inclusion (see include). It seems to say the include_path is the first location to be searched, but I have come across at least one case where the directory containing the including file was actually the first to be searched.
Drat.
If you're having problems with binary and hex data:
I had a problem when trying to read information from a ttf, which is primarily hex data. A binary-safe file read automatically replaces byte values with their corresponding ASCII characters, so I thought that I could use the binary string when I needed readable ASCII strings, and bin2hex() when I needed hex strings.
However, this became a problem when I tried to pass those ASCII strings into other functions (namely gd functions). var_dump showed that a 5-character string contained 10 characters, but they weren't visible. A binary-to-"normal" string conversion function didn't seem to exist and I didn't want to have to convert every single character in hex using chr().
I used unpack with "c*" as the format flag to see what was going on, and found that every other character was null data (ordinal 0). To solve it, I just did
str_replace(chr(0), "", $string);
which did the trick.
This took forever to figure out so I hope this helps people reading from hex data!
[Editors note: As of PHP 5.2.1 you can specify `timeout` context option and pass the context to file_get_contents()]
The only way I could get file_get_contents() to wait for a very slow HTTP request was to set the socket timeout as follows.
ini_set('default_socket_timeout', 120);
$a = file_get_contents("http://abcxyz.com");
Other times like execution time and input time had no effect.
Use the previous example if you want to request a specific part of the content from the server, IF and only if the server accepts the method.
If you want a simple example that asks the server for all the content but only saves a portion of it, do it this way:
<?php
$content = file_get_contents("http://www.google.com", FALSE, NULL, 0, 20);
echo $content;
?>
This will echo the first 20 bytes of the google.com source code.
The bug #36857 was fixed.
http://bugs.php.net/36857
Now you may use this code to fetch partial content, like this:
<?php
$context = array('http' => array('header' => 'Range: bytes=1024-'));
$xcontext = stream_context_create($context);
$str = file_get_contents("http://www.fcicq.net/wp/", FALSE, $xcontext);
?>
That's all.
If, like me, you are on a Microsoft network with ISA server and require NTLM authentication, certain applications will not get out of the network. SETI@Home Classic and PHP are just 2 of them.
The workaround is fairly simple.
First you need to use an NTLM Authentication Proxy Server. There is one written in Python and is available from http://apserver.sourceforge.net/. You will need Python from http://www.python.org/.
Both sites include excellent documentation.
Python works a bit like PHP. Human readable code is handled without having to produce a compiled version. You DO have the opportunity of compiling the code (from a .py file to a .pyc file).
Once compiled, I installed this as a service (instsrv and srvany - parts of the Windows Resource Kit), so when the server is turned on (not logged in), the Python based NTLM Authentication Proxy Server is running.
Then, and here is the bit I'm really interested in, you need to tell PHP you intend to route http/ftp requests through the NTLM APS.
To do this, you use contexts.
Here is an example.
<?php
// Define a context for HTTP.
$aContext = array(
    'http' => array(
        // This needs to be the server and the port of the NTLM Authentication Proxy Server.
        'proxy' => 'tcp://127.0.0.1:8080',
        'request_fulluri' => True,
    ),
);
$cxContext = stream_context_create($aContext);

// Now all file stream functions can use this context.
$sFile = file_get_contents("http://www.php.net", False, $cxContext);
echo $sFile;
?>
Hopefully this helps SOMEONE!!!
This functionality is now implemented in the PEAR package PHP_Compat.
More information about using this function without upgrading your version of PHP can be found on the below link:
http://pear.php.net/package/PHP_Compat