Company Search API

The endpoint for the Company Search API is https://api.peopledatalabs.com/v5/company/search.

Company Search API Access and Billing

The Company Search API is currently in beta with restricted access. Information on this page and the API's functionality are subject to change.

To request access to the Search API, reach out to us. Please report any issues or feedback to [email protected] and/or your Data Consultant.

We charge per record retrieved. Each company record in the "data" array of the response counts as a single "credit" against your total package.

Usage

PDL's Company Search API is built for finding the specific segments of companies you need to power your projects and products. It gives you direct access to query our full Company dataset, with enough degrees of freedom to find virtually any kind of company in a single query.

Requests

See Authentication and Requests for the possible ways to structure requests. We recommend using a JSON object to capture request parameters and will do so in the examples below.

Rate Limiting

The current default rate limit is 10 requests per minute.
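If you expect to approach this limit, space your requests out and retry when you receive an HTTP 429 response. Below is a minimal sketch of one way to do this, assuming the default limit above; the retry strategy is illustrative, not a prescribed client.

import time, requests

def search_with_retry(url, headers, params, max_retries=3):
  # Retry with a fixed pause whenever the rate limit (HTTP 429) is hit.
  for attempt in range(max_retries):
    response = requests.get(url, headers=headers, params=params)
    if response.status_code != 429:
      return response.json()
    time.sleep(6)  # 10 requests per minute -> at most one request every 6 seconds
  response.raise_for_status()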

Input Parameters

Parameter Name | Description | Default | Example
query | An Elasticsearch (v7.7) query. See our underlying Elasticsearch mapping for reference. | none | {"query": {"term": {"name": "people data labs"}}}
sql | A SQL query of the format: SELECT * FROM company WHERE XXX, where XXX is a standard SQL boolean query involving PDL fields. Any use of column selections or the LIMIT keyword will be ignored. | none | SELECT * FROM company WHERE name='people data labs'
size | The batch size, i.e. the maximum number of matched records to return for this query if they exist. Must be between 1 and 100. | 1 | 100
from | [LEGACY] An offset value for paginating between batches. Can be a number between 0 and 9999, so pagination via from is limited to the first 10,000 records matching a query. Note: from cannot be used with scroll_token in the same request. | 0 | 0, 100, 200 ...
scroll_token | An offset key for paginating between batches. Unlike the legacy from parameter, it can be used for any number of records. Each Search API response returns a scroll_token, which can be used to fetch the next size records. | None | 104$14.278746
pretty | Whether the output should have human-readable indentation. | false | true
api_key | Your API key. (This can also be provided in the request header instead, as shown on the Authentication page.) | |
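For illustration, here is how the two pagination styles differ in practice (the query value is a placeholder, and previous_response stands in for a parsed response from an earlier request; remember that from and scroll_token cannot be combined):

import json

ES_QUERY = {"query": {"term": {"industry": "computer software"}}}  # placeholder query
previous_response = {"scroll_token": "104$14.278746"}  # stand-in for an earlier parsed response

# Legacy offset pagination (limited to the first 10,000 matches):
P_page_2 = {'query': json.dumps(ES_QUERY), 'size': 100, 'from': 100}

# Scroll-token pagination (works for any number of records):
P_next_batch = {'query': json.dumps(ES_QUERY), 'size': 100,
                'scroll_token': previous_response['scroll_token']}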

Response

The HTTP response code will be 200 for any valid request, regardless of whether records were found for your query. For that reason, pay close attention to the "total" value in your response object to understand query success. Each company record in the "data" array of the response counts as a single "credit" against your total package; note that the size parameter defaults to a maximum of one returned record to prevent happy accidents.
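As a quick sketch (assuming response is a parsed Search API response, as in the walkthroughs below), the distinction between matched and returned records looks like this:

if response["status"] == 200:
  print(f"{response['total']} total records match this query")
  print(f"{len(response['data'])} records returned in this response (credits used)")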

Response Fields

Field | Description | Type
status | Response code. See a description of our Error Codes. | Integer
error | Error details. | Object
error.type | Error types. | List (String)
error.message | Error message. | String
data | Data returned. See the full example response or the example company record. | Array
total | Number of records matching a given query or sql input. | Integer
scroll_token | Scroll value used for pagination. | String
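A minimal sketch of branching on these fields, again assuming a parsed response object:

if response["status"] != 200:
  error = response.get("error", {})
  print(f"[{response['status']} - {error.get('type')}] {error.get('message')}")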

Abridged 200 Response Example (full example here):

{
    "status": 200,
    "data": [
        {
            "id": "google",
            "name": "google",
            "website": "google.com",
            ...
        },
        ...
    ],
    "total": 6,
    "scroll_token": "13.312621$543927"
}

Building a Query

You must provide a value for either the query parameter or the sql parameter to receive a successful response. The query value should align directly with the Elasticsearch DSL. SQL queries are executed using Elasticsearch SQL. Most typical query types are available, but some are excluded; for all available query types, see here.

When an API request is executed, the query is run directly against our Company dataset without any additional cleaning or pre-processing. This gives you a great deal of freedom to explore the dataset and return exactly the records you want, and it also means that understanding the available fields is very helpful for writing successful queries.

Field descriptions can be found here, and the Elasticsearch mapping underlying this API can be found here.
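One practical consequence of the mapping: keyword fields (such as industry) are matched against exact lowercase values using term queries, while text fields (such as summary) support analyzed full-text search using match queries. A minimal sketch of each, with placeholder values:

# Exact match against a keyword field:
exact_query = {"query": {"term": {"industry": "computer software"}}}

# Full-text search against an analyzed text field:
fulltext_query = {"query": {"match": {"summary": "people data"}}}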

Walkthroughs

All code examples are written in Python, with equivalent cURL requests shown where applicable.

Basic Usage

"I want to use Python to make a query and save the results to a file."

# Elasticsearch
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

ES_QUERY = {
  "query": {
    "bool": {
      "must": [
        {"term": {"website": "google.com"}},
      ]
    }
  }
}

P = {
  'query': json.dumps(ES_QUERY),
  'size': 10,
  'pretty': True
}

response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()

if response["status"] == 200:
  data = response['data']
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")
  print(f"successfully grabbed {len(data)} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The carrier pigeons lost motivation in flight. See error and try again.")
  print("Error:", response)

# SQL
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

SQL_QUERY = \
"""
  SELECT * FROM company
  WHERE website='google.com';
 """

P = {
  'sql': SQL_QUERY,
  'size': 10,
  'pretty': True
}

response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()

if response["status"] == 200:
  data = response['data']
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")
  print(f"successfully grabbed {len(data)} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The carrier pigeons lost motivation in flight. See error and try again.")
  print("error:", response)
# Elasticsearch
curl -X GET 'https://api.peopledatalabs.com/v5/company/search' \
-H 'X-Api-Key: xxxx' \
--data-raw '{
  "size": 10,
  "query": {
    "bool": {
      "must": [
        {"term": {"website": "google.com"}},
      ]
    }
  }
}'

# SQL
curl -X GET \
  'https://api.peopledatalabs.com/v5/company/search' \
  -H 'X-Api-Key: xxxx' \
  --data-raw '{
    "size": 10,
    "sql": "SELECT * FROM company WHERE website='\''google.com'\'';"
}'

Company Search by Tags

"I want to find US-based companies tagged as 'big data' in the financial services industry'."

# Elasticsearch
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

ES_QUERY = {
  "query": {
    "bool": {
      "must": [
        {"term": {"tags": "big data"}},
        {"term": {"industry": "financial services"}},
        {"term": {"location.country": "united states"}}
      ]
    }
  }
}

P = {
  'query': json.dumps(ES_QUERY),
  'size': 10,
  'pretty': True
}

response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()

if response["status"] == 200:
  data = response['data']
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")
  print(f"successfully grabbed {len(data)} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The carrier pigeons lost motivation in flight. See error and try again.")
  print("Error:", response)

# SQL
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

SQL_QUERY = \
"""
  SELECT * FROM company
  WHERE tags='big data'
  AND industry='financial services'
  AND location.country='united states';
 """

P = {
  'sql': SQL_QUERY,
  'size': 10,
  'pretty': True
}


response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()

if response["status"] == 200:
  data = response['data']
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")
  print(f"successfully grabbed {len(data)} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The carrier pigeons lost motivation in flight. See error and try again.")
  print("Error:", response)

Sales and Marketing

(search by description keywords)

"I want to find companies offering account based marketing services in the united states."

# Elasticsearch
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

ES_QUERY = {
  "query": {
    "bool": {
      "must": [
        {"match": {"summary": "account based marketing"}},
        {"term": {"location.country" : "united states"}}
      ]
    }
  }
}

P = {
  'query': json.dumps(ES_QUERY),
  'size': 100
}

response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()

if response["status"] == 200:
  
  data = response['data']
  
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")
  
  print(f"successfully grabbed {len(data)} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The eager beaver was not so eager. See error and try again.")
  print("error:", response)

# SQL
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

SQL_QUERY = \
f"""
  SELECT * FROM company
  WHERE MATCH(summary, 'account based marketing')
  AND location.country='united states';
"""

P = {
  'sql': SQL_QUERY,
  'size': 100
}

response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()



if response["status"] == 200:
  
  data = response['data']
  
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")
  
  print(f"successfully grabbed {len(data)} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The eager beaver was not so eager. See error and try again.")
  print("error:", response)

Investment Research

(search by industry, size and location)

"I want to find 100 small biotech companies headquartered in the San Francisco area with under 50 employees ."

# Elasticsearch
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  "Content-Type": "application/json",
  "X-api-key": API_KEY
}

# https://pdl-prod-schema.s3-us-west-2.amazonaws.com/14.0/enums/industry.txt
# for enumerated possible values of industry

# https://pdl-prod-schema.s3-us-west-2.amazonaws.com/14.0/enums/job_company_size.txt
# for enumerated possible values of company sizes

ES_QUERY = {
  "query": {
    "bool": {
      "must": [
        {"terms": {"size": ["1-10", "11-50"]}},
        {"term": {"industry" : "biotechnology"}},
        {"term": {"location.locality": "san francisco"}}
      ]
    }
  }
}

P = {
    "query": json.dumps(ES_QUERY),
    "size": 100
}

response = requests.get(
    PDL_URL,
    headers=H,
    params=P
).json()

if response["status"] == 200:
  
  data = response['data']
  
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")

  print(f"successfully grabbed {len(response['data'])} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The eager beaver was not so eager. See error and try again.")
  print("error:", response)

# SQL
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  "Content-Type": "application/json",
  "X-api-key": API_KEY
}

# https://pdl-prod-schema.s3-us-west-2.amazonaws.com/14.0/enums/industry.txt
# for enumerated possible values of industry

# https://pdl-prod-schema.s3-us-west-2.amazonaws.com/14.0/enums/job_company_size.txt
# for enumerated possible values of company sizes

SQL_QUERY = \
"""
  SELECT * FROM company
  WHERE size IN ('1-10', '11-50')
  AND industry = 'biotechnology'
  AND location.locality='san francisco';
"""

P = {
  'sql': SQL_QUERY,
  'size': 100
}

response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()

if response["status"] == 200:
  
  data = response['data']
  
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")

  print(f"successfully grabbed {len(response['data'])} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The eager beaver was not so eager. See error and try again.")
  print("error:", response)

Bulk Retrieval

"I want to find all the companies in the 'automotive' industry in the Detroit area and save them to a csv."

🚧

High Credit Usage Code Below

The code example below illustrates pulling all the company profiles in a metro area, and is meant primarily to demonstrate the use of the scroll_token parameter when retrieving large numbers of records. As a result, this code is mostly illustrative: it can use up a lot of credits, and it doesn't have any error handling. The MAX_NUM_RECORDS_LIMIT parameter in the example below sets the maximum number of profiles (i.e. credits) that will be pulled, so please set it accordingly when testing this example.

# Elasticsearch
import requests, json, time, csv

API_KEY = "YOUR API KEY"  # enter your API key

# Limit the number of records to pull (to prevent accidentally using up 
# more credits than expected when testing out this code).
MAX_NUM_RECORDS_LIMIT = 150 # The maximum number of records to retrieve
USE_MAX_NUM_RECORDS_LIMIT = True # Set to False to pull all available records

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

ES_QUERY = {
  'query': {
    'bool': {
      'must': [
        {'term': {'industry': "automotive"}},
        {'term': {'location.metro': "detroit, michigan"}}
      ]
    }
  }
}

P = {
  'query': json.dumps(ES_QUERY),
  'size': 100,
  'pretty': True
}

# Pull all results in multiple batches
batch = 1
all_records = []
start_time = time.time()
found_all_records = False
continue_scrolling = True

while continue_scrolling and not found_all_records: 

  # Check if we reached the maximum number of records we wanted to pull
  if USE_MAX_NUM_RECORDS_LIMIT:
    num_records_to_request = MAX_NUM_RECORDS_LIMIT - len(all_records)
    P['size'] = max(0, min(100, num_records_to_request))
    if num_records_to_request == 0:
      print(f"Stopping - reached maximum number of records to pull "
            f"[MAX_NUM_RECORDS_LIMIT = {MAX_NUM_RECORDS_LIMIT}]")
      break

  # Send request
  response = requests.get(
    PDL_URL,
    headers=H,
    params=P
  ).json()

  # Check response status code:
  if response['status'] == 200:
    all_records.extend(response['data'])
    print(f"Retrieved {len(response['data'])} records in batch {batch} "
          f"- {response['total'] - len(all_records)} records remaining")
  else:
    print(f"Error retrieving some records:\n\t"
          f"[{response['status']} - {response['error']['type']}] "
          f"{response['error']['message']}")
  
  # Get scroll_token from response
  if 'scroll_token' in response:
    P['scroll_token'] = response['scroll_token']
  else:
    continue_scrolling = False
    print(f"Unable to continue scrolling")

  batch += 1
  found_all_records = (len(all_records) == response['total'])
  time.sleep(6) # avoid hitting rate limit thresholds
 
end_time = time.time()
runtime = end_time - start_time
        
print(f"Successfully recovered {len(all_records)} profiles in "
      f"{batch} batches [{round(runtime, 2)} seconds]")

# Save profiles to csv (utility function)
def save_profiles_to_csv(profiles, filename, fields=[], delim=','):
  # Define header fields
  if fields == [] and len(profiles) > 0:
      fields = profiles[0].keys()
  # Write csv file
  with open(filename, 'w') as csvfile:
    writer = csv.writer(csvfile, delimiter=delim)
    # Write Header:
    writer.writerow(fields)
    # Write Body:
    count = 0
    for profile in profiles:
      writer.writerow([ profile[field] for field in fields ])
      count += 1
  print(f"Wrote {count} lines to: '{filename}'")

# Use utility function to save profiles to csv    
csv_header_fields = ['name', 'website', "linkedin_url",
                     'size', 'tags']
csv_filename = "all_company_profiles.csv"
save_profiles_to_csv(all_records, csv_filename, csv_header_fields)

# SQL
import requests, json, time, csv

API_KEY = "YOUR API KEY"  # enter your API key

# Limit the number of records to pull (to prevent accidentally using up 
# more credits than expected when testing out this code).
MAX_NUM_RECORDS_LIMIT = 150 # The maximum number of records to retrieve
USE_MAX_NUM_RECORDS_LIMIT = True # Set to False to pull all available records

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  'Content-Type': "application/json",
  'X-api-key': API_KEY
}

SQL_QUERY = \
f"""
  SELECT * FROM company
  WHERE industry = 'automotive'
  AND location.metro='detroit, michigan';
"""

P = {
  'sql': SQL_QUERY,
  'size': 100,
  'pretty': True
}

# Pull all results in multiple batches
batch = 1
all_records = []
start_time = time.time()
found_all_records = False
continue_scrolling = True

while continue_scrolling and not found_all_records: 

  # Check if we reached the maximum number of records we wanted to pull
  if USE_MAX_NUM_RECORDS_LIMIT:
    num_records_to_request = MAX_NUM_RECORDS_LIMIT - len(all_records)
    P['size'] = max(0, min(100, num_records_to_request))
    if num_records_to_request == 0:
      print(f"Stopping - reached maximum number of records to pull "
            f"[MAX_NUM_RECORDS_LIMIT = {MAX_NUM_RECORDS_LIMIT}]")
      break

  # Send request
  response = requests.get(
    PDL_URL,
    headers=H,
    params=P
  ).json()

  # Check response status code:
  if response['status'] == 200:
    all_records.extend(response['data'])
    print(f"Retrieved {len(response['data'])} records in batch {batch} "
          f"- {response['total'] - len(all_records)} records remaining")
  else:
    print(f"Error retrieving some records:\n\t"
          f"[{response['status']} - {response['error']['type']}] "
          f"{response['error']['message']}")
  
  # Get scroll_token from response
  if 'scroll_token' in response:
    P['scroll_token'] = response['scroll_token']
  else:
    continue_scrolling = False
    print(f"Unable to continue scrolling")

  batch += 1
  found_all_records = (len(all_records) == response['total'])
  time.sleep(6) # avoid hitting rate limit thresholds
 
end_time = time.time()
runtime = end_time - start_time
        
print(f"Successfully recovered {len(all_records)} profiles in "
      f"{batch} batches [{round(runtime, 2)} seconds]")

# Save profiles to csv (utility function)
def save_profiles_to_csv(profiles, filename, fields=[], delim=','):
  # Define header fields
  if fields == [] and len(profiles) > 0:
      fields = profiles[0].keys()
  # Write csv file
  with open(filename, 'w') as csvfile:
    writer = csv.writer(csvfile, delimiter=delim)
    # Write Header:
    writer.writerow(fields)
    # Write Body:
    count = 0
    for profile in profiles:
      writer.writerow([ profile[field] for field in fields ])
      count += 1
  print(f"Wrote {count} lines to: '{filename}'")

# Use utility function to save profiles to csv    
csv_header_fields = ['name', 'website', "linkedin_url",
                     'size', 'tags']
csv_filename = "all_company_profiles.csv"
save_profiles_to_csv(all_records, csv_filename, csv_header_fields)

Affiliate Lookup

(search by affiliated companies)

"I want to find all the companies that are affiliated with Amazon."

# Elasticsearch
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  "Content-Type": "application/json",
  "X-api-key": API_KEY
}

ES_QUERY = {
  "query": {
    "bool": {
      "must": [
        {"term": {"affiliated_profiles": "amazon"}}
      ]
    }
  }
}

P = {
    "query": json.dumps(ES_QUERY),
    "size": 100
}

response = requests.get(
    PDL_URL,
    headers=H,
    params=P
).json()

if response["status"] == 200:
  data = response['data']
  
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")

  print(f"successfully grabbed {len(response['data'])} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The eager beaver was not so eager. See error and try again.")
  print("error:", response)

# SQL
import requests, json

API_KEY = "YOUR API KEY"  # enter your API key

PDL_URL = "https://api.peopledatalabs.com/v5/company/search"

H = {
  "Content-Type": "application/json",
  "X-api-key": API_KEY
}

SQL_QUERY = \
f"""
  SELECT * FROM company
  WHERE affiliated_profiles = 'amazon';
"""

P = {
  'sql': SQL_QUERY,
  'size': 100
}

response = requests.get(
  PDL_URL,
  headers=H,
  params=P
).json()

if response["status"] == 200:
  
  data = response['data']
  
  with open("my_pdl_search.jsonl", "w") as out:
    for record in data:
      out.write(json.dumps(record) + "\n")

  print(f"successfully grabbed {len(response['data'])} records from pdl")
  print(f"{response['total']} total pdl records exist matching this query")
else:
  print("NOTE. The eager beaver was not so eager. See error and try again.")
  print("error:", response)

Query Limitations

Only a subset of Elasticsearch query types is accepted. Most specialized options are disabled, such as boosting and custom scoring, and aggregations are not supported. For the full list of available query types, see here.

Any SQL query that translates to the available query types via the ES SQL translate API will be accepted. This covers most basic SQL; joins, GROUP BY, and similar constructs are not supported.

Any array found in the query (such as a terms array) has a hard limit of 100 elements. Any query containing an array surpassing this limit will be rejected.
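If you need to filter on more than 100 values, one workaround (a sketch, not an official pattern) is to split the list into chunks of at most 100 elements and issue one query per chunk:

import json

websites = ["example-1.com", "example-2.com"]  # replace with your full (possibly >100-element) list

# Split the list into chunks that respect the 100-element array limit.
chunks = [websites[i:i + 100] for i in range(0, len(websites), 100)]

for chunk in chunks:
  ES_QUERY = {"query": {"terms": {"website": chunk}}}
  P = {'query': json.dumps(ES_QUERY), 'size': 100}
  # ...send the request as in the walkthroughs above,
  # and merge the results across chunks...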

Full Example Response

{
  "status": 200,
  "data": [
    {
      "id": "peopledatalabs",
      "website": "peopledatalabs.com",
      "name": "people data labs",
      "founded": 2015,
      "size": "11-50",
      "location": {
        "name": "san francisco, california, united states",
        "locality": "san francisco",
        "region": "california",
        "metro": "san francisco, california",
        "country": "united states",
        "continent": "north america",
        "street_address": "455 market street",
        "address_line_2": "suite 1670",
        "postal_code": "94105",
        "geo": "37.77,-122.41"
      },
      "industry": "computer software",
      "facebook_url": "facebook.com/peopledatalabs",
      "twitter_url": "twitter.com/peopledatalabs",
      "linkedin_url": "linkedin.com/company/peopledatalabs",
      "linkedin_id": "18170482",
      "email_domains": [],
      "ticker": null,
      "type": "private",
      "profiles": [
        "linkedin.com/company/peopledatalabs",
        "linkedin.com/company/18170482",
        "facebook.com/peopledatalabs",
        "twitter.com/peopledatalabs",
        "crunchbase.com/organization/talentiq"
      ],
      "tags": [
        "data",
        "people data",
        "data science",
        "artificial intelligence",
        "data and analytics",
        "machine learning",
        "analytics",
        "database",
        "software",
        "developer apis"
      ],
      "summary": "people data labs builds people data. \n\nuse our dataset of 1.5 billion unique person profiles to build products, enrich person profiles, power predictive modeling/ai, analysis, and more. we work with technical teams as their engineering focused people data partner. \n\nwe work with thousands of data science teams as their engineering focused people data partner. these include enterprises like adidas, ebay, and acxiom, as well as startups like madison logic, zoho, and workable. we are a deeply technical company, and are backed by two leading engineering venture capital firms - founders fund and 8vc.",
      "headline": "Your Single Source of Truth",
      "alternative_names": [],
      "alternative_domains": [],
      "affiliated_profiles": []
    }
  ],
  "scroll_token": "13.312621$5439277"
  "total": 6
}

Full Field Mapping

{
  "company_v14.3" : {
    "aliases" : {
      "company" : { }
    },
    "mappings" : {
      "_routing" : {
        "required" : true
      },
      "date_detection" : false,
      "properties" : {
        "affiliated_profiles" : {
          "type" : "keyword"
        },
        "alternative_domains" : {
          "type" : "keyword"
        },
        "alternative_names" : {
          "type" : "keyword",
          "doc_values" : false,
          "fields" : {
            "text" : {
              "type" : "text"
            }
          },
          "ignore_above" : 256
        },
        "email_domains" : {
          "type" : "keyword"
        },
        "facebook_url" : {
          "type" : "keyword"
        },
        "founded" : {
          "type" : "integer",
          "doc_values" : false
        },
        "headline" : {
          "type" : "text"
        },
        "id" : {
          "type" : "keyword"
        },
        "industry" : {
          "type" : "keyword"
        },
        "linkedin_id" : {
          "type" : "keyword"
        },
        "linkedin_url" : {
          "type" : "keyword"
        },
        "location" : {
          "properties" : {
            "address_line_2" : {
              "type" : "keyword"
            },
            "continent" : {
              "type" : "keyword"
            },
            "country" : {
              "type" : "keyword"
            },
            "geo" : {
              "type" : "geo_point",
              "doc_values" : false
            },
            "locality" : {
              "type" : "keyword"
            },
            "metro" : {
              "type" : "keyword"
            },
            "name" : {
              "type" : "keyword"
            },
            "postal_code" : {
              "type" : "keyword"
            },
            "region" : {
              "type" : "keyword"
            },
            "street_address" : {
              "type" : "keyword"
            }
          }
        },
        "name" : {
          "type" : "keyword",
          "doc_values" : false,
          "fields" : {
            "text" : {
              "type" : "text"
            }
          },
          "ignore_above" : 256
        },
        "profiles" : {
          "type" : "keyword"
        },
        "size" : {
          "type" : "keyword"
        },
        "summary" : {
          "type" : "text"
        },
        "tags" : {
          "type" : "keyword",
          "doc_values" : false,
          "fields" : {
            "text" : {
              "type" : "text"
            }
          },
          "ignore_above" : 256
        },
        "ticker" : {
          "type" : "keyword"
        },
        "twitter_url" : {
          "type" : "keyword"
        },
        "type" : {
          "type" : "keyword"
        },
        "website" : {
          "type" : "keyword"
        }
      }
    },
    "settings" : {
      "index" : {
        "mapping" : {
          "total_fields" : {
            "limit" : "2000"
          }
        },
        "refresh_interval" : "-1",
        "number_of_shards" : "20",
        "provided_name" : "company_v14.3",
        "creation_date" : "1621382632946",
        "requests" : {
          "cache" : {
            "enable" : "false"
          }
        },
        "number_of_replicas" : "0",
        "queries" : {
          "cache" : {
            "enabled" : "false"
          }
        },
        "uuid" : "0HMBrZSvT56bEqauP-OmyQ",
        "version" : {
          "created" : "7070099"
        }
      }
    }
  }
}

All Field Descriptions

See this doc.

