Unable to get URL to convert to PDF

Hello,


I am trying to convert a URL to PDF, and so far I have been unable to do so. Attached is a screenshot of my code and the error message I get. My code is not the cleanest because I have tried everything but the kitchen sink. ;0) I have version 1.7 of the Conversion.dll file.

My code is able to get the web page data and pass it on to the Convert call, but the conversion will not complete for some reason. In this case, at line 83 the State of the conversion is Failed and the ErrorMessage says "Specified method is not supported." At line 80 that conversion also failed, but with the ErrorMessage "This stream does not support seek operations." I understand the seek issue, but I am still unable to get URL-to-PDF conversion working.
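
For reference, the workaround I would expect for the seek error is to buffer the response into a seekable MemoryStream before handing it to Convert. Something roughly like this (only a sketch; a Convert overload that accepts a Stream is an assumption on my part):

using System.IO;
using System.Net;

// Buffer the forward-only network response into a MemoryStream, which supports Seek
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com"); // placeholder URL
using (WebResponse response = request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (MemoryStream buffered = new MemoryStream())
{
    responseStream.CopyTo(buffered); // copy the non-seekable response into memory
    buffered.Position = 0;           // rewind so the consumer can read from the start

    // Hypothetical call -- I do not know which Convert overload (if any) takes a Stream:
    // var convertResult = conversion.Convert(buffered, "SampleConverted.pdf", FileType.Pdf);
}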

I tried changing the input FileType to Html5, but that causes an exception to be thrown: "Conversion from HTML5 to PDF is not supported." If I try to change the output to Doc or Docx, I get a "not supported" exception as well.

That's the problem in a nutshell. I have spent weeks on this with no resolution in sight.

Hello,


We are sorry to hear that you are having this issue. To convert content that you retrieve from a URL into a PDF document, you should save that content as an HTML file and then convert the HTML file to PDF.

Please check this ready-to-use code example (just change the sample URL and file name to your data):

// We will store the HTML response of the request here
string siteContent = string.Empty;

// The URL you want to grab
string url = "put your url in here";

// Here we're creating our request; we haven't actually sent it yet...
// we're simply building the HTTP request to send off to the target site...
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.AutomaticDecompression = DecompressionMethods.GZip;

// Wrap everything that can be disposed in using blocks...
// They dispose of objects and prevent them from lying around in memory...
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) // Send the request
using (Stream responseStream = response.GetResponseStream()) // Load the response stream
using (StreamReader streamReader = new StreamReader(responseStream)) // Load the stream reader to read the response
{
    siteContent = streamReader.ReadToEnd(); // Read the entire response and store it in the siteContent variable
}

// Write the response contents to a new file named "test1.html"
StreamWriter outfile = new StreamWriter(Server.MapPath(@"App_Data/test1.html"));
outfile.Write(siteContent);

var conversion = GroupdocsConversion.Instance();
// Prepare the desired output file name with path
var outputFile = @"converted\SampleConverted.pdf";
// Convert and download the result
var convertResult = conversion.Convert("test1.html", outputFile, FileType.Pdf);
if (convertResult.State == ConversionState.Completed)
{
    Download(convertResult.ConvertedFileName);
}
else if (convertResult.State == ConversionState.Failed)
{
    ClientScript.RegisterStartupScript(GetType(), "errorMessage", "alert('Conversion failed: " + convertResult.ErrorMessage + "');", true);
}


As you can see from this code, we get the web content of the URL via a StreamReader, then save its content as an HTML file, and simply use this HTML file for the conversion.

This solution only partly works and will not do what we need it to do. What about embedded images? They do not come over using this method. Also, your code has some minor issues: you should call outfile.Flush(); after the write statement to make sure all the content is written to the stream before it is used. And saving the file to the local system and then converting it is not the preferred approach for our scenario.
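
For example, wrapping the writer in a using block (just a sketch of the fix I mean) guarantees the buffer is flushed and the file is closed before Convert reads it:

using (StreamWriter outfile = new StreamWriter(Server.MapPath(@"App_Data/test1.html")))
{
    outfile.Write(siteContent);
    outfile.Flush(); // explicit flush; Dispose at the end of the using block would also do this
}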


Any other way of doing this?

Hello,


Thank you for coming back. We are sorry, but at the current time the solution we suggested in the previous post is the only way to deal with URLs.

We will investigate whether it is possible to add such functionality.