
Datacenter Proxies

Scrapingdog also provides a proxy mode for using the web scraping API. It is simply an alternative to calling the scraping API directly; the functionality remains the same.

Any request to this proxy will be forwarded to the web scraping API.

Note: Remember to configure your code not to verify SSL certificates, and pass the target URL with http only.

Proxy Example

cURL

curl -x "http://scrapingdog:652c6647e4921e35dab690bc@proxy.scrapingdog.com:8081" -k "https://httpbin.org/ip"

Python

import requests

# Define the proxy URL with credentials
proxy_url = "http://scrapingdog:652c6647e4921e35dab690bc@proxy.scrapingdog.com:8081"

# Target URL
target_url = "https://httpbin.org/ip"

# Set up the proxy for the request
proxies = {
    "http": proxy_url,
    "https": proxy_url,
}

# Make the GET request
response = requests.get(target_url, proxies=proxies, verify=False)
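# verify=False skips SSL certificate verification (the equivalent of curl's -k).
# requests will emit an InsecureRequestWarning, which can be silenced with
# urllib3.disable_warnings() if needed.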

# Print the response content
print(response.text)

Node.js

const axios = require('axios');

const config = {
  method: 'get',
  url: 'https://httpbin.org/ip',
  proxy: {
    host: 'proxy.scrapingdog.com',
    port: 8081,
    auth: {
      username: 'scrapingdog',
      password: '652c6647e4921e35dab690bc',
    },
  },
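  // Note: for HTTPS targets you may also need to skip TLS certificate checks
  // (the curl example above uses -k), e.g. by adding
  // httpsAgent: new https.Agent({ rejectUnauthorized: false }) with require('https').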
};

axios(config)
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    console.error(error);
  });

PHP

<?php
$scraping_url = "https://httpbin.org/ip";  // Your target URL

$ch = curl_init();

// Set the target URL
curl_setopt($ch, CURLOPT_URL, $scraping_url);

// Set the proxy server details
curl_setopt($ch, CURLOPT_PROXY, "http://scrapingdog:652c6647e4921e35dab690bc@proxy.scrapingdog.com:8081");

// Return the response as a string instead of printing it directly
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Allow connections to SSL sites without certificates (the equivalent of curl -k)
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);

// Execute the cURL request
$response = curl_exec($ch);

// Check for cURL errors
if ($response === false) {
    echo 'cURL error: ' . curl_error($ch);
}

// Close cURL session
curl_close($ch);

// Output the response
echo $response;
?>

Ruby

require 'httpclient'
require 'openssl'

# Your target URL
scraping_url = 'https://httpbin.org/ip'

# Set the proxy server details
proxy_url = 'http://scrapingdog:652c6647e4921e35dab690bc@proxy.scrapingdog.com:8081'

client = HTTPClient.new(proxy_url)

# Skip SSL certificate verification (the equivalent of curl -k)
client.ssl_config.verify_mode = OpenSSL::SSL::VERIFY_NONE

# Send a GET request
response = client.get(scraping_url)

# Output the response
puts response.body

Java

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Authenticator;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.PasswordAuthentication;
import java.net.Proxy;
import java.net.URL;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class CurlToJava {
    public static void main(String[] args) {
        try {
            // Your target URL
            String scrapingUrl = "https://httpbin.org/ip";

            // Allow Basic authentication on the HTTPS CONNECT tunnel
            // (disabled by default since Java 8u111)
            System.setProperty("jdk.http.auth.tunneling.disabledSchemes", "");

            // Supply the proxy credentials
            Authenticator.setDefault(new Authenticator() {
                @Override
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication("scrapingdog",
                            "652c6647e4921e35dab690bc".toCharArray());
                }
            });

            // Skip SSL certificate verification (the equivalent of curl -k)
            TrustManager[] trustAll = new TrustManager[]{new X509TrustManager() {
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                public void checkClientTrusted(X509Certificate[] certs, String authType) {}
                public void checkServerTrusted(X509Certificate[] certs, String authType) {}
            }};
            SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(null, trustAll, new java.security.SecureRandom());
            HttpsURLConnection.setDefaultSSLSocketFactory(sslContext.getSocketFactory());
            HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);

            // Create a proxy
            Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy.scrapingdog.com", 8081));

            // Open a connection using the proxy
            HttpURLConnection connection = (HttpURLConnection) new URL(scrapingUrl).openConnection(proxy);

            // Set the request method (GET)
            connection.setRequestMethod("GET");

            // Get the response
            BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String line;
            StringBuilder response = new StringBuilder();

            while ((line = reader.readLine()) != null) {
                response.append(line);
            }

            reader.close();

            // Output the response
            System.out.println(response.toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}