US20140278575A1 - Systems And Methods Of Processing Insurance Data Using A Web-Scale Data Fabric - Google Patents


Info

Publication number
US20140278575A1
US20140278575A1
Authority
US
United States
Prior art keywords
data
insurance
store
memory
data store
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/201,046
Inventor
Alex Anton
Tim G. Sanidas
Jeff Perschall
Michael Bernico
Michael K. Cook
Lynn Calvo
V. Rao Kanneganti
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Farm Mutual Automobile Insurance Co
Original Assignee
State Farm Mutual Automobile Insurance Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Farm Mutual Automobile Insurance Co filed Critical State Farm Mutual Automobile Insurance Co
Priority to US14/201,046 priority Critical patent/US20140278575A1/en
Assigned to STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY reassignment STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ANTON, ALEX, BERNICO, MICHAEL, COOK, MICHAEL K., PERSCHALL, JEFF, SANIDAS, TIM G.
Assigned to STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY reassignment STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: COBI SYSTEMS, LLC
Assigned to COBI SYSTEMS, LLC reassignment COBI SYSTEMS, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KANNEGANTI, V. RAO, CALVO, LYNN
Publication of US20140278575A1 publication Critical patent/US20140278575A1/en
Abandoned legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/11File system administration, e.g. details of archiving or snapshots
    • G06F16/113Details of archiving
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/256Integrating or interfacing systems involving database management systems in federated or virtual databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G06F17/30424
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/51Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L69/322Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L69/329Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40Network security protocols

Definitions

  • the present disclosure relates to systems and methods for processing, storing, and accessing “big data,” and, more particularly, to platforms and techniques for search-based applications to extract information from a large dataset to perform operational activities and processing pertaining to the insurance industry.
  • the system comprises a stream ingestion hardware component adapted to receive data relating to an actionable event and configured to generate a data object based on the received data, a historical data store adapted to store a plurality of customer insurance policies, and a work-in-process (WIP) data store adapted to communicate with the stream ingestion hardware component and with the historical data store.
  • the WIP data store is configured to receive the data object from the stream ingestion hardware, and use the data object to retrieve, from the historical data store, insurance data associated with at least one of the plurality of customer insurance policies.
  • the system further comprises a search application adapted to interface with the WIP data store and configured to receive, from a requesting entity, a request to access at least a portion of the insurance data, retrieve at least the portion of the insurance data from the WIP data store, and provide at least the portion of the insurance data to the requesting entity.
  • the method comprises receiving, from a source, a message related to an event, generating a data object based on the message, and examining the data object to determine that the event is an actionable event related to insurance processing. Responsive to examining the data object, the method further retrieves, from a historical data store using the data object, insurance data associated with at least one customer insurance policy and stores the insurance data in a cache memory. Additionally, the method comprises receiving, from a requesting entity, a request to access at least a portion of the insurance data, responsive to receiving the request, retrieving at least the portion of the insurance data from the cache memory, and providing at least the portion of the insurance data to the requesting entity.
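The claimed flow (ingest a message, check whether the event is actionable, retrieve policy data from the historical store into a cache, then serve requests from that cache) can be sketched as below. This is a minimal illustration only; the class and field names, the sample policy record, and the set of actionable event types are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class DataObject:
    event_type: str      # illustrative field names, not from the patent
    policy_id: str

# Hypothetical stand-ins for the historical data store and the event rules.
HISTORICAL_STORE = {"P-100": {"holder": "A. Smith", "coverage": 50000}}
ACTIONABLE_EVENTS = {"claim_filed", "policy_change"}

def ingest(message: dict) -> DataObject:
    """Stream-ingestion step: generate a data object from a raw message."""
    return DataObject(message["type"], message["policy_id"])

def process(obj: DataObject, cache: dict) -> None:
    """If the event is actionable, retrieve policy data into the cache."""
    if obj.event_type in ACTIONABLE_EVENTS:
        cache[obj.policy_id] = HISTORICAL_STORE[obj.policy_id]

def serve(policy_id: str, cache: dict) -> dict:
    """Search-application step: answer a request from the cache."""
    return cache[policy_id]

cache = {}
process(ingest({"type": "claim_filed", "policy_id": "P-100"}), cache)
```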
  • FIG. 1 is a block diagram of an exemplary web-scale grid on which a web-scale data processing method may operate in accordance with some embodiments;
  • FIG. 2 is a block diagram of an exemplary web-scale federated database on which a web-scale data storage method may operate in accordance with some embodiments;
  • FIG. 3 is a block diagram of an exemplary web-scale stream processor on which a web-scale data storage stream processing method may operate in accordance with some embodiments;
  • FIG. 4 is a block diagram of an exemplary web-scale data-local processor on which a web-scale data-local processing method may operate in accordance with some embodiments;
  • FIG. 5 is a block diagram of an exemplary web-scale data fabric system on which a web-scale information retrieval method may operate in accordance with some embodiments;
  • FIG. 6 is a block diagram of an exemplary web-scale master data management system on which a web-scale master data management method may operate in accordance with some embodiments;
  • FIG. 7 is a block diagram of an exemplary web-scale data fabric system on which a web-scale analytics method may operate in accordance with some embodiments;
  • FIG. 8 is a block diagram of an exemplary web-scale data fabric system on which a web-scale search-based application method may operate in accordance with some embodiments;
  • FIG. 9 illustrates an exemplary use case for processing insurance data in accordance with some embodiments.
  • FIG. 10 is a flow diagram illustrating an exemplary method of processing insurance data in accordance with some embodiments.
  • FIG. 11 is a block diagram of a computing device in accordance with some embodiments.
  • processing big data may be accomplished in one of two ways.
  • the first way seeks to supplement present relational database and data movement technologies with Web-proven technologies such as Apache™ Hadoop®.
  • the second seeks to adopt an approach that various Web companies have with non-relational databases, along with implementing processing that moves function-to-data on commodity hardware and open source software.
  • the lure of the first approach is that companies can depend on large software vendors and familiar technologies to evolve toward web-scale processing.
  • the lure of the second approach is that it can be scaled and is more economical than current relational database or data movement technology techniques.
  • SDN software-defined networking
  • Intel® coprocessor can enable high performance computing.
  • the Intel® coprocessor, for example the Xeon Phi™ coprocessor, holds the promise of reducing software development complexity for high performance computing relative to real-time processing solutions.
  • SDN can be used in combination with the coprocessor in cases in which the SDN isolates network traffic resulting from high performance computing from other, non-high performance computing network traffic.
  • function-to-data processing can be employed in non-relational databases and in-database processing can be employed in relational databases.
  • Various hands-on research indicates that business intelligence vendors are introducing function-to-data processing on technologies such as Apache™ HBase™ and Apache™ Hadoop®. These advancements can efficiently and effectively bring big data capabilities within reach of business partners.
  • Multi-structured data can be a combination of unstructured, semi-structured, and structured data.
  • WSDF Web-Scale Data Fabric
  • the term “insurance policy,” as used herein, generally refers to a contract between an insurer and an insured.
  • the insurer pays for damages to the insured which are caused by covered perils, acts or events as specified by the language of the insurance policy.
  • the payments from the insured are generally referred to as “premiums,” and typically are paid on behalf of the insured over time at periodic intervals.
  • the amount of the damages payment is generally referred to as a “coverage amount” or a “face amount” of the insurance policy.
  • An insurance policy may remain (or have a status or state of) “in-force” while premium payments are made during the term or length of coverage of the policy as indicated in the policy.
  • An insurance policy may “lapse” (or have a status or state of “lapsed”), for example, when premium payments are not being paid, when a cash value of a policy falls below an amount specified in the policy (e.g., for variable life or universal life insurance policies), or if the insured or the insurer cancels the policy.
  • an insurance provider is used interchangeably herein to generally refer to a party or entity (e.g., a business or other organizational entity) that provides insurance products, e.g., by offering and issuing insurance policies.
  • an insurance provider may be an insurance company.
  • a person or customer (or an agent of the person or customer) of an insurance provider fills out an application for an insurance policy.
  • the application may undergo underwriting to assess the eligibility of the party and/or desired insured article or entity to be covered by the insurance policy, and, in some cases, to determine any specific terms or conditions that are to be associated with the insurance policy, e.g., amount of the premium, riders or exclusions, waivers, and the like.
  • insurance policy may be in-force, e.g., the policyholder is enrolled.
  • FIGS. 1-8 are merely exemplary and can include different combinations and aggregations of components.
  • the data nodes, application cache nodes, access nodes, and index nodes and any software associated therewith as depicted in some or all of FIGS. 1-8 may be combined into one or more hardware components to perform the functionalities as described herein. It should be appreciated that other combinations of components are envisioned.
  • Section 1 Web-Scale Grid
  • FIG. 1 depicts a configuration design 100 showing the core of the WSDF implementing an elastic platform or “Grid.”
  • This Grid is composed of a plurality of nodes (1-n) ( 105 ) that can be networked through software-defined connectivity.
  • “elastic” refers to the ability to add or remove nodes 105 within the Grid to accommodate capacity requirements and/or failure replacement. Although only two nodes 105 are depicted in FIG. 1, it should be appreciated that other numbers of nodes 105 are envisioned.
  • each node 105 can be designed to be equipped with a mid-range multi-core central processing unit (CPU) 106 , direct-attached storage (DAS) 107 consisting of a set of drives sometimes referred to as “just a bunch of disks” (JBOD), random access memory (RAM) 108 , and one or more coprocessor cards 109 .
  • CPU central processing unit
  • DAS direct-attached storage
  • JBOD just a bunch of disks
  • RAM random access memory
  • the precise configuration of each node 105 can depend on its purpose for addressing web-scale requirements.
  • Networking between nodes 105 is enabled with a networking device (such as a network switch 110 as shown in FIG. 1 ) where connectivity can be defined with software.
  • the precise configuration of network connectivity depends on the purpose for addressing web-scale requirements.
  • a stack of software 111 may be advantageous.
  • the software stack 111 is designed to provide the kernel or operating system for the Grid.
  • the software stack 111 can be configured to include a Linux 2.6+ 64-bit framework, the Hadoop® 2.0+ framework, and/or other frameworks. The precise stack configuration for each node 105 depends on the purpose for addressing web-scale requirements. It should be appreciated that the software stack 111 can include other frameworks or combinations of frameworks.
  • the combination of the mid-range multi-core CPU 106, the coprocessor card 109, the RAM 108, and a software-defined network (SDN) 112 can provide the computational capabilities for the Grid. It should be appreciated that additional coprocessor cards 109 and/or nodes 105 can enable additional computing scale. In some configurations, this computational design can be a hybrid of high-performance computing (HPC) and many-task computing (MTC) grids. In some embodiments, the Apache™ Hadoop® YARN sub-project can enable the coexistence of HPC and MTC computation types within the same Grid.
  • This hybrid design can be further enhanced through the use of the SDN 112 as well as a mid-range multi-core CPU.
  • the SDN 112 can be used to isolate the network connectivity requirements for computation types from other competing network traffic. It is expected that this configuration may facilitate lower cost computing and network connectivity, along with lower power demands per flop.
  • the DAS 107 on each of the nodes 105 can be made available through the Apache™ Hadoop® Distributed File System (HDFS), combined with the SDN 112, to provide the storage capabilities for the Grid. Additional drives and/or nodes with drives can enable additional storage scale.
  • the SDN 112 can be used to isolate the network connectivity requirements for storage from other competing network traffic. It is expected that this configuration or configurations similar thereto can facilitate lower cost network connectivity associated with storage per gigabyte.
  • the network devices used within the Grid are designed for operation using the OpenFlow protocol.
  • OpenFlow combined with the SDN 112 can be referred to herein as a Network Operating System (NOS) 115 . It is expected that this configuration of the NOS 115 can facilitate lower cost network devices and lower power demands.
  • NOS Network Operating System
  • the web-scale Grid uses the SDN 112 to manage connectivity and uses the coprocessor 109 accelerator for distributed parallel computation.
  • the CPU 106 can be used in combination with the coprocessor 109 for horizontal and vertical scaling to provide distributed parallel computation.
  • the web-scale Grid can facilitate storage using both DAS and RAM, whereby the combination of the coprocessor 109 and the storage enables the Grid to achieve web-scale.
  • Section 2 Web-Scale Federated Database
  • a federated database design 200 is deployed on the web-scale grid as discussed with respect to FIG. 1 .
  • the federated database can be designed and configured for storage of transactions with low latency. From a hardware perspective, there are several nodes configured to store various data and several nodes configured to implement an in-memory cache. The number of nodes can be directly related to the scalability requirements for storage or low latency data ingestion.
  • One or more in-memory caches 225 can be designed and configured for distribution across one or more various data centers 219 , thus enabling a distributed cache.
  • By spanning data centers across a wide area network (WAN), the Grid can be positioned for high availability despite a disaster or disruption within any given data center 219.
  • object transaction data that originates from either machine sources 220 (such as a home or automobile) or applications 221 is stored within the in-memory cache 225 before being asynchronously relayed and replicated using data transfer objects (DTO) to a log-structured merge-tree (LSM-tree) database 226 within each data center 219 .
  • Apache™ HBase™ is an example of an LSM-tree database.
  • the in-memory cache 225 plus the LSM-tree database 226 per data center 219 can comprise the federated database.
  • the LSM-tree databases 226 can be optimized for throughput to support low latency data ingestion.
  • DTOs can be enhanced with a timestamp as they are relayed to the LSM-tree databases 226 in each data center 219 .
  • the timestamp combined with a globally unique identifier (GUID) for the corresponding DTO can provide the basis for a multi-version concurrency control dataset (MCC).
  • MCC multi-version concurrency control dataset
  • Transactions are stored with the MCC where each change to the DTO is appended.
  • the resulting transaction history facilitates a point-in-time rollback of any given object transaction.
  • the internal MCC data design is independent of the type of database, thus enabling portability across other LSM-tree databases.
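The MCC design above (GUID plus timestamp as the version key, each DTO change appended, point-in-time rollback on read) can be sketched as follows. This is an in-memory illustration of the data design only, independent of any LSM-tree database, and it assumes versions are appended in timestamp order, as they would arrive from the relay.

```python
import bisect

class MCCStore:
    """Append-only multi-version store keyed by (GUID, timestamp).

    Each change to a DTO is appended as a new version; reads can roll
    back to any point in time. A sketch of the patent's MCC data design,
    not tied to any particular database.
    """
    def __init__(self):
        self._versions = {}  # guid -> list of (timestamp, value), time-ordered

    def append(self, guid, timestamp, value):
        """Append a new version (assumes timestamps arrive in order)."""
        self._versions.setdefault(guid, []).append((timestamp, value))

    def read(self, guid, as_of=None):
        """Return the latest version at or before `as_of` (default: newest)."""
        versions = self._versions[guid]
        if as_of is None:
            return versions[-1][1]
        idx = bisect.bisect_right([ts for ts, _ in versions], as_of)
        return versions[idx - 1][1] if idx else None

store = MCCStore()
store.append("g1", 1, {"status": "open"})
store.append("g1", 5, {"status": "closed"})
print(store.read("g1", as_of=3))  # → {'status': 'open'} (point-in-time rollback)
```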
  • Storage of data in the LSM-tree databases 226 can be designed and configured such that object transaction data can be range-partitioned for distribution across the apportioned Grid nodes. This range partitioning can be based on the GUID and timestamp key concatenation.
  • Each object transaction can also be designed for optimized storage, with or without encoding.
  • the column family and column descriptor can be encoded. In some cases, codes, descriptions, and other metadata such as data type and length can be stored separately in a cross-reference table.
  • the object transaction or DTO can then be (de)serialized and mapped into LSM-tree database data types. For implementations using HBase™, the DTO can be (de)serialized into a tuple where each column can be represented in byte arrays.
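A minimal sketch of this (de)serialization, together with the GUID-and-timestamp row-key concatenation used for range partitioning: each DTO field becomes a (qualifier, value) byte-array pair, and the row key zero-pads the timestamp so that lexicographic ordering matches time ordering. The field values and key layout are illustrative assumptions; a real HBase mapping would also carry column-family and type metadata.

```python
def row_key(guid: str, timestamp: int) -> bytes:
    # GUID + zero-padded timestamp: lexicographic order matches time order,
    # so rows can be range-partitioned on the concatenated key.
    return f"{guid}:{timestamp:012d}".encode("utf-8")

def serialize(dto: dict) -> list:
    """Flatten a DTO into (qualifier, value) byte-array pairs, mirroring
    how each column is represented as byte arrays (string values only
    in this sketch; real implementations encode typed values)."""
    return [(k.encode("utf-8"), v.encode("utf-8")) for k, v in sorted(dto.items())]

def deserialize(cells: list) -> dict:
    return {q.decode("utf-8"): v.decode("utf-8") for q, v in cells}

dto = {"claim_id": "C-1", "peril": "hail"}   # hypothetical claim DTO
cells = serialize(dto)
```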
  • the in-memory cache 225 can be designed and configured to evict the least recently used (LRU) data.
  • LRU least recently used
  • the in-memory cache 225 can be designed to perform an on-demand read-through from the LSM-tree database 226 , with an affinity toward the database within the same data center 219 (if available).
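The two cache behaviors described above, LRU eviction and on-demand read-through from the LSM-tree database, can be sketched as below. The backing store is modeled as a plain dictionary; data-center affinity is omitted for brevity.

```python
from collections import OrderedDict

class ReadThroughCache:
    """In-memory cache that evicts least-recently-used (LRU) entries and
    reads through to a backing store (the LSM-tree database) on a miss."""
    def __init__(self, backing_store, capacity):
        self._store = backing_store
        self._capacity = capacity
        self._cache = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)       # mark as recently used
            return self._cache[key]
        value = self._store[key]               # read-through on miss
        self._cache[key] = value
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)    # evict the LRU entry
        return value

db = {"a": 1, "b": 2, "c": 3}      # stands in for the LSM-tree database
cache = ReadThroughCache(db, capacity=2)
cache.get("a"); cache.get("b"); cache.get("a")
cache.get("c")                     # over capacity: evicts "b", the LRU entry
```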
  • the federated database design pattern can take advantage of eventual consistency to provide availability that spans multiple data centers without being dependent on database log-based replication.
  • the federated database design 200 can also provide storage for multi-structured data ingested through streaming. See the Web-Scale Stream Processor (Section 3) for additional details regarding this implementation.
  • the web-scale federated database design 200 utilizes an in-memory key value object cache in concert with the LSM-tree databases 226 for low latency transaction ingestion with consistency in cache to eventual consistency among the LSM-tree databases 226 across the data centers 219 .
  • the web-scale federated database design 200 utilizes MCC on multi-structured data for “discovery-friendly” analytics with positioning for automated storage optimization.
  • Section 3 Web-Scale Stream Processor
  • an extension to both the Web-Scale Grid (Section 1) and the Web-Scale Federated Database (Section 2) is a stream processor implementation 300 .
  • the stream processor implementation 300 is designed and configured to ingest and process multi-structured data in-stream with low latency through messaging.
  • several nodes can be leveraged with memory (for processing) and combined with storage (for high availability).
  • nodes can be grouped into clusters with a design configuration that federates clusters across data centers 319 to manage capacity while addressing availability in case of disaster at any of the data centers 319 .
  • AMQP advanced message queuing protocol
  • the design utilizes the advanced message queuing protocol (AMQP) open standard.
  • AMQP enables interoperability as well as support for the ingestion of multi-structured data.
  • messages are ingested through AMQP brokers hosted on federated clusters of the web-scale grid nodes.
  • a front office cluster 325 can address low latency ingestion and processing facilitated primarily with RAM.
  • the back office cluster 326 can address processing with less demanding latency facilitated primarily with DAS.
  • One of each cluster type is enabled within the corresponding data center 319 .
  • Messages ingested with the front office cluster 325 are published to all back office clusters 326 within each data center 319 to enable high availability in case of disaster.
  • messages can be processed by consumers that subscribe to queues.
  • CEP complex event processing
  • consumers are designed to work with an in-memory distributed cache.
  • the CEP functionalities may be implemented by the CEP cluster 327 .
  • This in-memory distributed cache used within the stream processor implementation 300 is shared with the web-scale federated database.
  • data can be accessed using continuous query processing to determine occurrences of predefined events.
  • CEP is also designed to work with semantic processing software for classifying unstructured data in messages. That classification is subsequently published to another queue for further processing.
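A toy sketch of continuous query processing: consumers drain a message queue and a predicate fires a handler whenever a predefined event occurs. An in-process `queue.Queue` stands in for the AMQP broker, and the claim messages and the large-claim predicate are invented for illustration.

```python
from queue import Queue

def continuous_query(messages: Queue, predicate, on_event):
    """Drain a message queue, invoking on_event for each message matching
    the predefined-event predicate (a CEP sketch; a real deployment would
    consume from AMQP broker queues instead of an in-process Queue)."""
    while not messages.empty():
        msg = messages.get()
        if predicate(msg):
            on_event(msg)

q = Queue()
for amount in (100, 25000, 300):
    q.put({"event": "claim_filed", "amount": amount})

matches = []
# Predefined event: a claim exceeding a hypothetical $10,000 threshold.
continuous_query(q, lambda m: m["amount"] > 10000, matches.append)
```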
  • an example of semantic classification software is Apache™ Stanbol™.
  • the stream processing implementation 300 is further configured to store messages on the web-scale federated database.
  • the message storing functionality can be also addressed with consumers on queues associated with the back office cluster 326 . These consumers are designed to operate in batch through a scheduler compatible with the web-scale federated database.
  • An example scheduler could be Hadoop® YARN.
  • the stream processor implementation 300 is further designed to amass ingested messages for independent subsequent processing while providing interoperability and extensibility through open messaging for multi-structured data.
  • the stream processor implementation 300 can use in-memory cache in concert with AMQP messaging for low latency CEP.
  • CEP is also designed to work with semantic processing software for classifying unstructured data in messages.
  • Section 4 Web-Scale Data-Local Processor
  • an implementation 400 includes the web-scale grid as discussed with respect to Section 1 deployed on the web-scale federated database as discussed with respect to Section 2.
  • the data-local processor is designed to enable concurrent, distributed, and parallel computation of the data residing on web-scale grid nodes (as discussed with respect to Section 1), through use of common statistical and semantic classification software.
  • one or more data-local processor nodes 405 are designed to enable statistical software operations using high performance computing (HPC) with a message passing interface (MPI) and/or many-task computing (MTC) with a Map Reduce (MR) programming or computational model.
  • HPC high performance computing
  • MPI message passing interface
  • MR Map Reduce
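The MTC-with-MapReduce style of data-local computation can be sketched as below: each "node" runs the map phase over only its own local partition, and a reduce phase merges the partial results. The per-node peril data is invented for illustration; a real deployment would schedule these map tasks on the grid nodes holding the data.

```python
from collections import Counter
from functools import reduce

# Hypothetical data partitions, one per grid node (data stays node-local).
node_partitions = [
    ["hail", "fire", "hail"],   # node 1's local DAS
    ["theft", "hail"],          # node 2's local DAS
]

def map_local(partition):
    """Map phase: each node counts perils in its own partition."""
    return Counter(partition)

def reduce_counts(a, b):
    """Reduce phase: merge the partial counts from the nodes."""
    return a + b

partials = [map_local(p) for p in node_partitions]  # runs node-local
total = reduce(reduce_counts, partials)
print(total["hail"])  # → 3
```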
  • Each of the nodes 405 is equipped with a combination of a mid-range multi-core CPU 406 , one or more coprocessor cards 409 , and RAM 408 for computation on data local to the corresponding node 405 .
  • Each of the nodes 405 is also equipped to facilitate the execution of semantic classification software.
  • An example of statistical software is "R" and an example of semantic classification software is Apache™ Stanbol™.
  • the data-local processor nodes 405 are further designed to enable software-defined network (SDN) connectivity in support of computational capabilities.
  • SDN software-defined network
  • Network connectivity management and operation with the SDN can provide a more effective means for enabling both programming and/or computational models to operate on the same set of nodes within the web-scale grid.
  • computation can be orchestrated with corresponding client software on a client workstation. Further, statistical programs and ontologies can be deployed from this client workstation.
  • the web-scale data-local processor implementation 400 can utilize a combination of high-performance computing (HPC) and many-task computing (MTC) facilitated by SDN, the one or more coprocessor cards 409, and/or data locality-based computation with direct-attached storage (DAS).
  • HPC high-performance computing
  • MTC many-task computing
  • DAS direct-attached storage
  • the CPU 406 can be used in combination with the one or more coprocessor cards 409 for horizontal and vertical scaling to provide distributed parallel computation.
  • the web-scale Grid can facilitate storage using both DAS and RAM, whereby the combination of the one or more coprocessors 409 and the storage enables the Grid to achieve web-scale. Further, the use of RAM as a cache of DAS can enable data-local computation.
  • Section 5 Web-Scale Information Retrieval
  • a web-scale information retrieval implementation 500 positions the web-scale federated database (as discussed with respect to Section 2) for content management of multi-structured data along with the web-scale data-local processor (as discussed with respect to Section 4) to facilitate content classification and indexing.
  • the information retrieval implementation 500 can address index processing using the Apache™ Lucene™ software operating with a data-local processor.
  • content processed by the information retrieval implementation 500 can be ingested through the web-scale stream processor (as discussed with respect to Section 3).
  • portions of the implementation 500 may be facilitated by one or more information retrieval applications 530 .
  • the information retrieval implementation 500 can be designed to index content incrementally as it is stored on the federated database. Access to indexes for search queries can be enabled through additional nodes that extend the web-scale grid with an additional cluster. Generated index files can be copied to this search cluster and managed periodically. For low latency indexing applications, content can be indexed on insert into the federated database, while the index cluster is updated.
  • the generated index can reference content in the federated database.
  • Search query results can include content descriptions along with a key for retrieval of content from the federated database. This content key can be the basis for retrieval of data from the federated database.
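The index-references-content pattern above can be sketched with a toy inverted index: the index stores only terms mapped to content keys, and the content itself is fetched from the federated database by key. Lucene/SolrCloud would fill this role at web scale; the documents and database dictionary are invented for illustration.

```python
class SearchIndex:
    """Toy inverted index: terms map to content keys; the content itself
    stays in the federated database and is retrieved by key at query time."""
    def __init__(self, federated_db):
        self._db = federated_db
        self._index = {}  # term -> set of content keys

    def index_document(self, key, text):
        """Incremental indexing as content is stored in the database."""
        for term in text.lower().split():
            self._index.setdefault(term, set()).add(key)

    def search(self, term):
        """Return (key, content) pairs; the key is the retrieval handle."""
        keys = sorted(self._index.get(term.lower(), ()))
        return [(k, self._db[k]) for k in keys]

db = {"doc1": "Hail claim filed", "doc2": "Fire claim settled"}
idx = SearchIndex(db)
for key, text in db.items():
    idx.index_document(key, text)   # indexed incrementally on insert
```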
  • the index cluster can process queries using, for example, the SolrCloud™ software.
  • Each node 505 can contain index replicas and can be designed and configured to operate with high availability.
  • the number of nodes in the index cluster can be relative to the extent of search queries and volume of users.
  • each data center can include the described layout of index and search functionalities.
  • the combined deployment across data centers for information retrieval can provide availability resilience in disaster situations affecting an entire data center.
  • the design and configuration of the information retrieval implementation 500 can provide low latency indexing and search across all multi-structured data and content. Further, the design and configuration of the information retrieval implementation can provide the basis for search-based applications (SBA) to address development of both operational and analytic applications.
  • Section 6 Web-Scale Master Data Management
  • a data management implementation 600 includes the web-scale grid (as discussed with respect to Section 1), the web-scale federated database (as discussed with respect to Section 2), the web-scale stream processor (as discussed with respect to Section 3), the web-scale data-local processor (as discussed with respect to Section 4), and the web-scale information retrieval (as discussed with respect to Section 5).
  • the data management implementation 600 can facilitate the collection and processing of data at extreme scale despite variety, velocity, and/or volume at a high-availability and/or disaster-recovery service level.
  • the data can be arranged and architected for management by the corresponding business, an architecture practice generally referred to as master data management. Accordingly, in some cases, the web-scale master data management implementation 600 can be the data architecture atop the web-scale platform.
  • data can be arranged according to its source within the federated database and, through the use of multi-version concurrency control (MCC) data design, can contain a log or history of known changes.
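The multi-version concurrency control (MCC) behavior described above, in which the store retains a log or history of known changes rather than overwriting records, can be sketched minimally as follows. The class and field names are illustrative, not the patent's design.

```python
# Sketch of multi-version record keeping: each update appends a new
# version, so the store retains a log or history of known changes.
import itertools

class VersionedStore:
    def __init__(self):
        self._versions = {}                 # key -> list of (version, value)
        self._counter = itertools.count(1)  # monotonically increasing versions

    def put(self, key, value):
        version = next(self._counter)
        self._versions.setdefault(key, []).append((version, value))
        return version

    def get(self, key):
        """Latest value for key."""
        return self._versions[key][-1][1]

    def history(self, key):
        """Full log of known changes for key."""
        return list(self._versions[key])

store = VersionedStore()
store.put("policy-42", {"deductible": 500})
store.put("policy-42", {"deductible": 250})
latest = store.get("policy-42")
log = store.history("policy-42")
```

Reads see the latest version while the full change log stays queryable, which is the property the MCC data design relies on here.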
  • an insurance claim may reference the primary named insured, claimant, vehicle, peril, and/or the policy.
  • search can be the method for acquiring the required reference data for transactions.
  • assessing the quality of master data is integral to the management of the data.
  • faceted search can be the vehicle for identifying duplicate data occurrences as well as examining spelling variances that may affect data quality.
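The role of faceted search in surfacing duplicate occurrences and spelling variances can be illustrated with a toy facet count. The data, helper names, and the case/whitespace normalization rule are illustrative stand-ins.

```python
# Sketch of using facet counts to surface duplicates and spelling
# variances in master data (illustrative data and rule).
from collections import Counter

def facet_counts(records, field):
    """Count occurrences of each value of `field` across records."""
    return Counter(r[field] for r in records if field in r)

def likely_variants(counts):
    """Group values that differ only by case/whitespace, flagging
    possible spelling variances that may affect data quality."""
    groups = {}
    for value in counts:
        groups.setdefault(value.strip().lower(), []).append(value)
    return {k: v for k, v in groups.items() if len(v) > 1}

records = [
    {"insured": "John Smith"},
    {"insured": "john smith"},
    {"insured": "Jane Doe"},
    {"insured": "John Smith"},
]
counts = facet_counts(records, "insured")
variants = likely_variants(counts)
```

The facet counts expose the duplicate occurrences, and the grouping step flags "John Smith"/"john smith" as a variance for a data-quality review.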
  • the master data management implementation 600 can provide the architecture needed to map all ingested data with corresponding search indexes.
  • the master data management implementation 600 can utilize search-based master data retrieval across various multi-structured data.
  • the master data management implementation 600 can utilize classification enabled with facets to provide metrics for data quality assessments.
  • a web-scale analytics implementation 700 builds on the web-scale grid (as discussed with respect to Section 1), the web-scale federated database (as discussed with respect to Section 2), the web-scale stream processor (as discussed with respect to Section 3), the web-scale data-local processor (as discussed with respect to Section 4), the web-scale information retrieval (as discussed with respect to Section 5), and the web-scale master data management (as discussed with respect to Section 6).
  • the web-scale analytics implementation 700 can leverage the stream processor and/or the data-local processor to compute aggregates, depending on latency requirements. In particular, aggregates that are routinely used can be periodically pre-computed and stored in the federated database for shared access.
  • These pre-computed aggregates can also be indexed and accessed through information retrieval and/or correlated with master data using master data management.
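The pre-computed-aggregate pattern described above can be sketched as a periodic job that computes a routinely used aggregate once and stores it for shared, low-latency access. The data and names are illustrative.

```python
# Sketch of periodically pre-computing a routinely used aggregate and
# storing it for shared access (illustrative data and names).
claims = [
    {"region": "IL", "amount": 1200},
    {"region": "IL", "amount": 800},
    {"region": "IN", "amount": 500},
]

def compute_totals_by_region(rows):
    """Full scan; in the described architecture this would run as a
    periodic data-local or stream-processing job."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]
    return totals

# Periodic job: pre-compute and store in a shared location.
shared_aggregates = {"claim_total_by_region": compute_totals_by_region(claims)}

# Low-latency consumers read the stored aggregate instead of re-scanning.
il_total = shared_aggregates["claim_total_by_region"]["IL"]
```

Consumers that can tolerate the refresh interval read the stored result; only latency-sensitive, ad hoc aggregates need on-the-fly computation.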
  • On-the-fly aggregates can depend on grid memory and coprocessors for computation, gaining speed through concurrency, data locality, and/or computation while the data is in-flight.
  • the web-scale analytics implementation 700 can be designed for consumption through interactive visualizations. These visualizations can be generated using business intelligence (BI) tools.
  • BI tools can be hosted on a number of nodes that extend the grid.
  • These BI tools can also be designed and configured to provide self-service (i.e., user-defined) function-to-data aggregate processing using the data-local processor.
  • pre-computed aggregates can also be designed for transfer and storage to a columnar store.
  • columnar storage can provide economy-of-scale and can be well-suited for speed-of-thought analytics.
  • This columnar store can be positioned for the interim to provide continuity for BI tools that operate with SQL. It should be appreciated that equivalent speed-of-thought analytics for use within the federated database are envisioned.
  • a nested columnar data representation within the federated database can be positioned as the replacement for a columnar store.
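The appeal of a columnar representation for aggregate queries can be illustrated with a toy row-to-column pivot. The data is illustrative; real columnar stores add compression and encoding on top of this layout.

```python
# Sketch contrasting row-wise and column-wise layouts: a columnar layout
# lets an aggregate scan one contiguous column instead of whole rows.
rows = [
    {"policy": "P1", "premium": 100, "state": "IL"},
    {"policy": "P2", "premium": 150, "state": "IL"},
    {"policy": "P3", "premium": 120, "state": "IN"},
]

def to_columnar(rows):
    """Pivot a list of row dicts into one list per column."""
    columns = {}
    for row in rows:
        for field, value in row.items():
            columns.setdefault(field, []).append(value)
    return columns

columnar = to_columnar(rows)
# The aggregation touches only the 'premium' column, not entire rows.
total_premium = sum(columnar["premium"])
```

Scanning a single column is what makes columnar layouts economical for the speed-of-thought analytics described above; a nested columnar representation applies the same idea to hierarchical records.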
  • the web-scale analytics implementation 700 can utilize stream processing and data-local processing to compute data aggregations, and can choose the optimal processing method based on latency requirements.
  • the web-scale analytics implementation 700 can enable self-service (i.e., user-defined) data-local processing for analytics.
  • the web-scale analytics implementation 700 can store pre-computed aggregates in a columnar store for continuity with current business intelligence (BI) tools, as well as provide speed-of-thought interactive visualizations at an economy-of-scale.
  • Section 8 Web-Scale Search-Based Application
  • a web-scale search-based implementation 800 that can include the web-scale grid (as discussed with respect to Section 1), the web-scale federated database (as discussed with respect to Section 2), the web-scale stream processor (as discussed with respect to Section 3), the web-scale data-local processor (as discussed with respect to Section 4), the web-scale information retrieval (as discussed with respect to Section 5), the web-scale master data management (as discussed with respect to Section 6), and the web-scale analytics (as discussed with respect to Section 7).
  • the web-scale search-based implementation 800 can be used to build both operational and analytic applications.
  • the type of application that best utilizes the web-scale search-based implementation 800 can be referred to as a web-scale search-based application 840 .
  • Search functionality can add another dimension to the design of these web-scale search-based applications, particularly as a building block for master data management and as the basis for navigating analytics.
  • the design for search-based applications can leverage information retrieval functionalities.
  • Some applications that are operational for processing transactions and/or facilitating applications used for analytics can be addressed through a search-based application design. This combination is distinct from other search-based application designs that are primarily analytical.
  • the search-based implementation 800 is also unique in that it includes the data-local processor and stream processor for generating analytics whereas existing designs rely on analytics provided by a search engine and/or an analytic tool that moves data-to-function.
  • the search-based application 840 can be developed using information retrieval and analytics graphic user interface (GUI) components. These GUI components are enabled with software development kits.
  • the assembled GUI can be a mash-up of visualizations from analytics and faceted navigation from information retrieval.
  • master data management is applicable with the search-based application 840 .
  • lookup functionalities of reference data to associate with a transaction may be expected for operational applications.
  • visualization of data quality metrics for master data may be expected to include integration with analytics.
  • the search-based application 840 may integrate analytic computations such as scoring an insurance claim for potential special investigation, displaying a targeted advertisement, and/or other functionalities.
  • Development of these analytic computations applied with the data-local processor and stream processor can take advantage of distributed parallel or concurrent computing with data locality or function-to-data processing. This development approach may leverage high performance computing (HPC) with the message passing interface (MPI) and/or many-task computing (MTC) with the MapReduce (MR) programming/computational model.
  • GUI components of the search-based application 840 can leverage an extension to the Grid.
  • the extension includes a set of nodes that host the application on containers within web application servers. These web application servers can be designed and configured to take advantage of in-memory cache for managing web sessions and to provide high availability across the data centers.
  • the search-based application 840 can include various applications to use the data storage, ingestion, and analysis systems and methods discussed herein to enable a user to perform and/or automate various tasks. For example, it may be advantageous to use a web-scale search-based application to assist with filling out and/or verifying insurance claims.
  • the search-based application 840 can be configured to fill out an insurance claim and may also leverage the techniques discussed herein to streamline the process of filling out an insurance claim. For example, if a hail storm occurs in Bloomington, Ill. on May 3, various news stories, posts on social networks, blog posts, etc. will likely be written about the storm. These stories and posts may be directly on point (e.g., “A hailstorm occurred in Bloomington today”) or may indirectly refer to the storm (e.g., “My car windshield is broken #bummer”). Using the techniques discussed above, these stories, posts, and data may be identified and analyzed using complex event processing (CEP) to determine whether a storm occurred over a particular area and/or whether the storm was severe enough to cause damage.
  • analytics may determine whether the “Bloomington” of the first post refers to Bloomington, Ill. or Bloomington, Ind. by determining whether words and metadata (e.g., IP address) associated with the post are more proximate to Illinois or Indiana. Additionally, if multiple posts and stories discuss damage to property in a timeframe on or shortly after May 3, analytics may be used to estimate the likelihood and extent of damage. Further, the originally unstructured and semi-structured data from these posts and stories that have been ingested with the web-scale stream processor (as discussed with respect to Section 3) may be analyzed with structured data (e.g., telematics data, information from insurance claims, etc.).
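The disambiguation step described above can be sketched as a simple token-overlap score. The hint lists, scoring rule, and metadata handling are illustrative stand-ins for the analytics the patent describes.

```python
# Toy sketch of location disambiguation: score a post against each
# candidate by counting associated tokens (hint lists are illustrative).
LOCATION_HINTS = {
    "Bloomington, IL": {"illinois", "il", "mclean", "normal"},
    "Bloomington, IN": {"indiana", "in", "monroe", "hoosier"},
}

def disambiguate(text, metadata_tokens=()):
    """Pick the candidate whose hint words best overlap the post's
    words and metadata-derived tokens."""
    tokens = set(text.lower().split()) | {t.lower() for t in metadata_tokens}
    scores = {place: len(tokens & hints)
              for place, hints in LOCATION_HINTS.items()}
    return max(scores, key=scores.get), scores

place, scores = disambiguate(
    "Hailstorm hit Bloomington today, worst storm in Illinois this year",
    metadata_tokens=["IL"],  # e.g., derived from IP-address geolocation
)
```

A production version would weight terms and use geolocation properly, but the principle is the same: words and metadata "more proximate" to one state tip the score toward that candidate.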
  • a web-scale search-based application 840 that is configured to fill out an insurance claim may compare information from these analytics to information associated with John Smith (e.g., his Bloomington, Ill. home address, the telematics data from his truck indicating that multiple sharp forces occurred at the front of the vehicle, and/or other data) to determine that the insurance claim likely relates to hail damage and to automatically populate the fields in an insurance form associated with the claim and relating to cause and extent of damage.
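The prepopulation step can be sketched as a comparison of event analytics against customer data. The field names and the matching rule below are illustrative, not the patent's actual logic.

```python
# Sketch of auto-populating claim-form fields by comparing event
# analytics with customer data (illustrative fields and rule).
def prepopulate_claim(event, customer):
    form = {"insured": customer["name"], "cause": None, "location": None}
    # If the event location matches the customer's address and telematics
    # shows repeated impacts consistent with the event, infer the cause
    # of loss and prefill the related fields.
    if (event["location"] == customer["address_city"]
            and customer["telematics"].get("front_impacts", 0) > 1):
        form["cause"] = event["type"]
        form["location"] = event["location"]
    return form

event = {"type": "hail", "location": "Bloomington, IL"}
customer = {
    "name": "John Smith",
    "address_city": "Bloomington, IL",
    "telematics": {"front_impacts": 3},
}
form = prepopulate_claim(event, customer)
```

Fields left as `None` would still require customer or agent input; only fields the analytics can support are filled automatically.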
  • a web-scale search-based application that is configured to verify claims can determine whether a cause and/or an extent of damage (or other aspects of an insurance claim) are within a likely range based on analysis of structured, semi-structured, and unstructured data using the WSDF.
  • web-scale search-based applications can address development of both operational and analytic applications.
  • web-scale search-based applications can utilize search-based master data retrieval for transactional reference data.
  • web-scale search-based applications can utilize faceted navigation of multi-structured data with information retrieval.
  • the web-scale search-based applications can combine stream processing and data-local processing for aggregation, depending on latency requirements.
  • Section 9 Web-Scale Data Fabric Use Case
  • the example use case 900 described in this section provides a more detailed example of how the unique capabilities of the WSDF architecture may be used to make a company or business more competitive, such as by streamlining insurance data initiation and processing.
  • the use case 900 described herein can be a subset of a larger use case originally designed for both business consumption (e.g., insurance operations) and to manage the infrastructure (e.g., IT systems operational) of the WSDF.
  • the use case 900 can be designed using a concept known as visual interactive intelligent infrastructure (VI3).
  • the use case 900 may be designed using other techniques or concepts.
  • The business competitive advantage of VI3-B is the ability to prepopulate information in forms for a potential insurance claim based upon either a machine- or customer-generated event notification, as well as to perform post-processing analytics. In embodiments, having potential insurance information prepopulated saves both the insurance customer and the insurance provider the time burden of manually entering information to activate a claim.
  • Another advantage of VI3-B is the ability to provide proactive notification to business-to-business (B2B) services of the potential impact to their businesses should the event trigger be related to a mega-claim type of event.
  • the example use case 900 scenario starts with a significant hail storm 950 , triggering an event notification received from a streamed feed provided by the National Oceanic and Atmospheric Administration (NOAA) 951 .
  • the event notification is ingested as an AMQP message 952 and interpreted as an actionable event.
  • the AMQP message 952 is sent as a data transfer object (DTO) 954 to an in-memory data store for work-in-process (WIP) 953 .
  • Complex event processing (CEP) of the memory data store 953 can use a continuous query capability to identify the actionable event as a trigger to request that all (or some) current policy holder information within the geographical area of the hail storm be transferred from a historical data store 955 (e.g., LSM-tree and MCC database) to the in memory WIP data store 953 as a cached data object 960 .
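The continuous-query trigger described above can be sketched as a standing rule evaluated against each arriving event; a match causes policies in the affected geographical area to be cached. The stores, event shape, and the rule itself are illustrative.

```python
# Sketch of a continuous query: each arriving event is checked against a
# standing rule; a match triggers caching of policies in the affected
# area (stores, event shape, and rule are illustrative).
HISTORICAL_STORE = [
    {"policy": "P1", "zip": "61701"},   # Bloomington, IL
    {"policy": "P2", "zip": "61704"},
    {"policy": "P3", "zip": "47401"},   # Bloomington, IN
]

def continuous_query(event, wip_cache):
    """Standing rule: severe-weather events trigger a transfer of
    policies in the affected ZIP codes into the WIP cache."""
    if event.get("kind") == "severe_weather":
        affected = set(event["zips"])
        for policy in HISTORICAL_STORE:
            if policy["zip"] in affected:
                wip_cache[policy["policy"]] = policy
        return True
    return False

wip_cache = {}
triggered = continuous_query(
    {"kind": "severe_weather", "type": "hail", "zips": ["61701", "61704"]},
    wip_cache,
)
```

Only policies inside the storm's footprint are cached, so later FNOL events can be matched against a small in-memory working set rather than the full historical store.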
  • the WIP data store 953 can initiate pre-population for a potential claim submission and store the potential claim submission in cache.
  • this transfer of data from the historical data store 955 to the in memory WIP data store 953 may be efficiently managed through operational policies defined to manage the software defined network (SDN).
  • damage from the hail storm to autos, homes or other items 957 covered for the customers may also trigger a first notice of loss (FNOL) event 958 through, for example, automatic sensor-based detection or from a customer contact received about a loss from the hail storm.
  • the customer contact may be an email, text message, photo, video, phone call, and/or the like.
  • the FNOL is ingested by a stream ingestion component 959 as an AMQP message and interpreted as an actionable event.
  • the AMQP message is sent as a DTO ( 954 ) to the in memory WIP data store 953 .
  • the CEP of the in memory WIP data store 953 can identify this actionable event as a trigger to attempt to match the FNOL information to one of the cached policies 960 .
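The FNOL-matching step can be sketched as a lookup against the cached policy objects 960 . The matching key and data shapes below are illustrative.

```python
# Sketch of matching an incoming FNOL event to a cached policy object
# (matching key and data shapes are illustrative).
cached_policies = {
    "P1": {"policy": "P1", "insured": "John Smith", "zip": "61701"},
    "P2": {"policy": "P2", "insured": "Jane Doe", "zip": "61704"},
}

def match_fnol(fnol, cache):
    """Return the cached policy matching the FNOL, or None."""
    for policy in cache.values():
        if (policy["insured"] == fnol.get("insured")
                and policy["zip"] == fnol.get("zip")):
            return policy
    return None

fnol = {"insured": "John Smith", "zip": "61701", "loss": "windshield"}
matched = match_fnol(fnol, cached_policies)
```

A successful match links the FNOL to the pre-populated claim object; an unmatched FNOL would fall back to a lookup against the historical store.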
  • data from additional entities 962 such as various business-to-business supporting services may also provide information related to various events that may necessitate insurance claim processing.
  • the in memory WIP data store 953 may process the data from additional entities 962 and match the data to one or more of the cached policies 960 .
  • the pre-populated object transaction is updated to reflect the receipt of FNOL and to submit a transaction to a claim system (as illustrated by 961 ).
  • As the information related to the hail storm is continuously stream processed by the message broker into the distributed cache of the in memory WIP data store 953 , the information is further enriched for information retrieval through low-latency indexing and semantic processing to allow the information to be searched and analyzed in near real-time and with proper context.
  • the near real-time indexing and searching capabilities in the WSDF can be enabled by using Lucene™/Solr™ and/or coprocessors.
  • various end users from various groups such as agency 963 , claims 964 , and/or business process researchers 965 may use the search based application 966 to gain further insight into insurance policies and the processing and/or initiation thereof.
  • the agent 963 may want to query how the hail storm may be impacting his or her book of business.
  • the claim handler 964 may want to query to assess the storm's impact on financial reserves or estimate (e.g., using historical and analytical data stores) the number of claim handlers needed to manage a response to a large or mega claim event.
  • business process researchers 965 may want to assess how well claims were processed from the FNOL event to claim close.
  • the loss data that is collected from the storm could be used to assist various B2B services to prepare them for better servicing policy holders to recover from losses.
  • the master data management (MDM) capabilities can be used to ensure data integrity and consistency of policy holder data cached as a result of the hail storm event, for example by updating in the in memory WIP data store 953 and writing back updated policy information 956 to the historical data store 955 .
  • MCC can be used to ensure the consistency of the historical data store 955 , and this same level of integrity and consistency can be replicated to a WSDF data center replica entity 967 .
  • The technical capabilities of the WSDF can provide the insurance provider with an opportunity to act upon information in near real-time as the data is ingested and indexed.
  • being able to make business decisions as events unfold can provide a competitive advantage for serving both customers as well as optimizing business operations.
  • having a rich archive of information can provide the insurance provider with an opportunity to explore how events correlate with other business events. This ability to explore historical data in detail will provide for better business modeling, forecasting, and development of business rules that may be implemented to optimize business operations.
  • the opportunity is not limited to claim operations as in this use case, but extends to all aspects of the business involved in customer sales, service, retention, and business auditing and compliance.
  • FIG. 10 is an example method 1000 for processing insurance data. At least a portion of the method 1000 may be performed by one or more computing devices, in an embodiment. For example, the method 1000 may be performed by the stream ingestion component 959 in combination with the in memory WIP data store 953 and/or the search based application 966 as described with respect to FIG. 9 .
  • the computing device can receive (block 1005 ), from at least one source, a message related to an event.
  • the message can be data relating to a FNOL event.
  • the at least one source can be one or more various sensors associated with an insurance policy, a customer or agent, a service (e.g., NOAA), and/or a supporting business-to-business service or entity.
  • the message can be received as an AMQP message.
  • the computing device can generate (block 1010 ) a data object based on the message.
  • the data object can be a DTO.
  • the computing device can examine (block 1015 ), for example using complex event processing (CEP), the data object to determine that the event is an actionable event related to insurance data processing.
  • the computing device can receive updated or additional messages or data from the at least one source and combine the original message with updated messages or data to determine that the event is an actionable event. For example, a message can be received from a weather service that notifies of a blizzard warning and an additional FNOL event message can be received from a customer reporting damage from a blizzard. As a result, the computing device can determine that the blizzard constitutes an actionable event. In some embodiments, the computing device can enrich the data object using low-latency indexing and/or semantic processing.
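The multi-message decision described in this example can be sketched as a rule over the set of received messages. The two-signal rule below is illustrative; a real CEP engine would apply richer time- and location-aware conditions.

```python
# Sketch of combining an original message with later ones before deciding
# an event is actionable (the two-signal rule is illustrative).
def is_actionable(messages):
    """Require both a weather warning and a matching FNOL report."""
    kinds = {m["kind"] for m in messages}
    return "weather_warning" in kinds and "fnol" in kinds

received = [{"kind": "weather_warning", "event": "blizzard"}]
first_check = is_actionable(received)    # warning alone: not yet actionable
received.append({"kind": "fnol", "event": "blizzard"})
second_check = is_actionable(received)   # warning plus FNOL: actionable
```

The warning alone does not trigger processing; the later FNOL message corroborates it, matching the blizzard example in the text.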
  • the computing device can retrieve (block 1020 ), from a historical data store using the data object, insurance data associated with at least one customer insurance policy, whereby the historical data store can be configured to store a plurality of customer insurance policies.
  • the at least one customer insurance policy can correspond to at least one policy that may be affected by the actionable event.
  • the computing device can examine the data object to identify a geographical area associated with the actionable event and can retrieve the at least one customer insurance policy having a location within or otherwise associated with the geographical area.
  • the computing device can store (block 1025 ) the insurance data in a cache memory.
  • the cache memory enables effective and efficient retrieval of the insurance data.
  • the computing device can receive (block 1030 ), from a requesting entity, a request to access at least a portion of the insurance data.
  • the request can be received via a search based application and the requesting entity can be one or more of an insurance agent, business process researcher, or a claim handler.
  • the portion of the insurance data can correspond to one or more specific customer insurance policies and how the actionable event may potentially impact claims for the one or more specific customer insurance policies.
  • the computing device can retrieve (block 1035 ) at least the portion of the insurance data from the cache memory and provide (block 1040 ) at least the portion of the insurance data to the requesting entity.
  • the computing device can generate processed insurance data based on the data object (with or without enrichment according to the low-latency indexing and/or the semantic processing) and at least the portion of the insurance data. Accordingly, the requesting entity can search and analyze the processed insurance data in near real-time.
  • the computing device can determine (block 1045 ) that the actionable event is covered by the at least one customer insurance policy.
  • the computing device can generate (block 1050 ) a policy transaction for the at least one customer insurance policy wherein the policy transaction is based on the actionable event.
  • the policy transaction can be a pre-filled insurance form associated with a potential claim.
  • the computing device can submit (block 1055 ) the policy transaction to a claim system, such as a claim system associated with an insurance provider.
  • the computing device can also send (block 1060 ) data indicative of the policy transaction to the historical data store for storage therein. As a result, the historical data store can store updated data associated with the appropriate customer insurance policy.
  • FIG. 11 illustrates an example computing device 1115 (such as the stream ingestion component 959 and/or the in memory WIP data store 953 as described with respect to FIG. 9 ) in which the functionalities as discussed herein may be implemented.
  • the computing device 1115 can include a processor 1172 as well as a memory 1174 .
  • the memory 1174 can store an operating system 1176 capable of facilitating the functionalities as discussed herein as well as a set of applications 1178 .
  • one of the set of applications 1178 can be the search based application 966 as described with respect to FIG. 9 .
  • the processor 1172 can interface with the memory 1174 to execute the operating system 1176 and the set of applications 1178 .
  • the memory 1174 can also store data associated with insurance policies, any received telematics data or event data, and/or other data.
  • the memory 1174 can include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), cache memory, hard drives, flash memory, MicroSD cards, and others.
  • the computing device 1115 can further include a communication module 1180 configured to communicate data via one or more networks 1110 .
  • the communication module 1180 can include one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and configured to receive and transmit data via one or more external ports 1182 .
  • the communication module 1180 can receive telematics data from one or more vehicles via the network 1110 and can receive any supplemental data or relevant data associated with driving tip models from a third party entity or component.
  • the computing device 1115 can transmit driving tips to vehicles via the communication module 1180 and the network(s) 1110 .
  • the computing device 1115 may further include a user interface 1184 configured to present information to a user and/or receive inputs from the user.
  • the user interface 1184 includes a display screen 1186 and I/O components 1188 (e.g., ports, capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, speakers, microphones, and others).
  • the user may access the computing device 1115 via the user interface 1184 to examine ingested data, examine processed insurance claims, and/or perform other functions.
  • a computer program product in accordance with an embodiment includes a computer usable storage medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code is adapted to be executed by the processor 1172 (e.g., working in connection with the operating system 1176 ) to facilitate the functions as described herein.
  • the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via C, C++, Java, Actionscript, Objective-C, Javascript, CSS, XML, and/or others).
  • AMQP Advanced Messaging Queuing Protocol
  • Commodity Computing refers to components based on open standards and provided by several manufacturers with little differentiation.
  • CEP Complex Event Processing
  • CMS Content Management System
  • Continuous Query refers to a means of actively applying rules to data changes, often in support of Complex Event Processing (CEP).
  • Coprocessor supplements the function of the CPU in a general purpose context.
  • Direct Attached Storage refers to a digital storage device (e.g., hard disk) that is directly connected (no network device) to a host.
  • Distributed Cache refers to both the means of caching data in transit to (write) and from (read) the database across a grid of servers, as well as the ability of such a scheme to address high-availability.
  • Distributed Operating System refers to software that manages the computing resources and provides common services where each node hosts a subset of the global aggregate operating system.
  • GUID Globally Unique Identifier
  • High-Availability (HA) Grid or Cluster refers to a group of computers that operate by providing reliable hosting of applications with graceful degradation and/or upgrade due to component failure or addition, respectively, but not at the expense of availability.
  • Availability is defined as the means to submit additional processing or manage existing processing.
  • Hadoop® Distributed File System is a component of the Hadoop® framework that manages storage of files in a fault tolerant and distributed fashion using replicated blocks across a set of data nodes.
  • Hadoop® Yet Another Resource Negotiator (YARN) is a component of the Hadoop® framework that manages computing resources on the set of data nodes, which are also used for computation.
  • High Performance Computing is characterized as needing large amounts of computing power over short periods of time, often expressed with tightly coupled low latency interconnects such as the Message Passing Interface (MPI).
  • Information Retrieval refers to inverted indexing and query of multi-structured data.
  • Linux is the operating system used to manage a node and its computational and file storage resources.
  • Log-Structured Merge Tree (LSM-tree) database is a datastore optimized for high write throughput.
  • Low Latency refers to a network computing delay that is generally accepted as imperceptible by humans.
  • MTC Many-Task Computing
  • MDM Master Data Management
  • Message Broker is used for enabling enterprise integration patterns used to integrate systems.
  • Multi-Structured data refers to an all-inclusive set of structured, semi-structured, and unstructured data.
  • Multi-Version Concurrency Control is a method used by databases to implement transaction history.
  • Object Transaction refers to a unit of work for any data change to an Object attribute recorded by the database.
  • Ontology is a set of semantic metadata from which unstructured data classification is based.
  • OpenFlow is a communication protocol that enables network connectivity through switch paths determined by software.
  • SDN Software Defined Network
  • Stream Processing refers to the application of messaging for the purposes of addressing parallel processing of in-flight data used for Complex Event Processing (CEP).
  • Semantic Processing refers to the ability to bring meaningful search to enterprise search engines through natural language processing and associated content classification based on ontology.

Abstract

Methods and systems for processing data, such as insurance data, using a Web-Scale Data Fabric (WSDF). According to embodiments, a stream ingestion hardware component can ingest messages related to an actionable event and send data objects to an in-memory data store based on the messages. The in-memory data store can retrieve insurance policy information that may be applicable to the actionable event and store the insurance policy information in cache memory. A search-based application interfaces with the in-memory data store to search for and retrieve data associated with the insurance policy information, from which a user or administrator may process or otherwise access the information. The systems and methods can further initiate insurance claim processing as well as enrich the data using various techniques to enable real-time search.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 61/800,561, filed Mar. 15, 2013, which is incorporated by reference herein.
  • FIELD OF THE DISCLOSURE
  • The present disclosure relates to systems and methods for processing, storing, and accessing “big data,” and, more particularly, to platforms and techniques for search-based applications to extract information from a large dataset to perform operational activities and processing pertaining to the insurance industry.
  • BACKGROUND
  • The increasing usage of the Internet by individual users, companies, and other entities, as well as the general increase of available data, has resulted in a collection of data sets that is both large and complex. In particular, the increased prevalence and usage of mobile devices, sensors, software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless networks have led to an increase in available data sets. This collection of data sets is often referred to as “big data.” Because of the size of big data, existing database management systems and data processing applications are not able to adequately curate, capture, search, store, share, transfer, visualize, or otherwise analyze the big data. Theoretical solutions for big data processing require on the order of thousands of hardware servers to adequately process big data, which would result in massive costs and resources for companies and other entities.
  • Companies, corporations, and the like are starting to feel the pressure to effectively and efficiently process big data. In some cases, users are more often expecting instantaneous access to various information resulting from big data analyses. In other cases, companies feel the need to implement big data processing systems in an attempt to gain an edge on their competitors, as big data analyses can be beneficial to optimizing existing business systems or products as well as implementing new business systems or products. For example, there is a need for insurance providers to analyze big data in an effort to create new insurance products and policies, refine existing insurance products and policies, more accurately price insurance products and policies, process insurance claims, and generally gather more “intelligence” that can ultimately result in lower costs for customers.
  • Accordingly, there is an opportunity to implement systems and methods for processing big data related to insurance applications, products, and data.
  • SUMMARY
  • One embodiment of the techniques discussed herein relates to a system for insurance data processing. The system comprises a stream ingestion hardware component adapted to receive data relating to an actionable event and configured to generate a data object based on the received data, a historical data store adapted to store a plurality of customer insurance policies, and a work-in-process (WIP) data store adapted to communicate with the stream ingestion hardware component and with the historical data store. The WIP data store is configured to receive the data object from the stream ingestion hardware, and use the data object to retrieve, from the historical data store, insurance data associated with at least one of the plurality of customer insurance policies. The system further comprises a search application adapted to interface with the WIP data store and configured to receive, from a requesting entity, a request to access at least a portion of the insurance data, retrieve at least the portion of the insurance data from the WIP data store, and provide at least the portion of the insurance data to the requesting entity.
  • Another embodiment of the techniques discussed herein relates to a method of processing insurance data. The method comprises receiving, from a source, a message related to an event, generating a data object based on the message, and examining the data object to determine that the event is an actionable event related to insurance processing. Responsive to examining the data object, the method further retrieves, from a historical data store using the data object, insurance data associated with at least one customer insurance policy and stores the insurance data in a cache memory. Additionally, the method comprises receiving, from a requesting entity, a request to access at least a portion of the insurance data, responsive to receiving the request, retrieving at least the portion of the insurance data from the cache memory, and providing at least the portion of the insurance data to the requesting entity.
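The method described above can be illustrated with a minimal Python sketch that walks a message through data-object generation, an actionability check, retrieval from a historical store, caching, and a request path. All names here (the event types, `HISTORICAL_STORE`, the policy record layout) are hypothetical placeholders chosen for the example, not part of the disclosure:

```python
from dataclasses import dataclass, field

# Hypothetical set of event types treated as actionable for insurance processing.
ACTIONABLE_EVENTS = {"collision", "hail_damage", "theft"}

@dataclass
class DataObject:
    event_type: str
    policy_id: str
    payload: dict = field(default_factory=dict)

# Stand-in for the historical data store of customer insurance policies.
HISTORICAL_STORE = {
    "POL-100": {"holder": "A. Smith", "coverage": 50000},
}

def process_message(message: dict, cache: dict) -> bool:
    """Generate a data object from a message and, if the event is
    actionable, pull the matching policy into the in-memory cache."""
    obj = DataObject(message["event"], message["policy_id"], message)
    if obj.event_type not in ACTIONABLE_EVENTS:
        return False
    policy = HISTORICAL_STORE.get(obj.policy_id)
    if policy is None:
        return False
    cache[obj.policy_id] = policy          # store in cache memory
    return True

def handle_request(policy_id: str, cache: dict):
    """Search-application side: serve policy data from the cache."""
    return cache.get(policy_id)
```

In this sketch the dictionary passed as `cache` plays the role of the work-in-process (WIP) data store's cache memory, from which the requesting entity is served.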
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an exemplary web-scale grid on which a web-scale data processing method may operate in accordance with some embodiments;
  • FIG. 2 is a block diagram of an exemplary web-scale federated database on which a web-scale data storage method may operate in accordance with some embodiments;
  • FIG. 3 is a block diagram of an exemplary web-scale stream processor on which a web-scale data storage stream processing method may operate in accordance with some embodiments;
  • FIG. 4 is a block diagram of an exemplary web-scale data-local processor on which a web-scale data-local processing method may operate in accordance with some embodiments;
  • FIG. 5 is a block diagram of an exemplary web-scale data fabric system on which a web-scale information retrieval method may operate in accordance with some embodiments;
  • FIG. 6 is a block diagram of an exemplary web-scale master data management system on which a web-scale master data management method may operate in accordance with some embodiments;
  • FIG. 7 is a block diagram of an exemplary web-scale data fabric system on which a web-scale analytics method may operate in accordance with some embodiments;
  • FIG. 8 is a block diagram of an exemplary web-scale data fabric system on which a web-scale search-based application method may operate in accordance with some embodiments;
  • FIG. 9 illustrates an exemplary use case for processing insurance data in accordance with some embodiments;
  • FIG. 10 is a flow diagram illustrating an exemplary method of processing insurance data in accordance with some embodiments; and
  • FIG. 11 is a block diagram of a computing device in accordance with some embodiments.
  • DETAILED DESCRIPTION
  • Many companies, corporations, firms, and other entities, including large software vendors, are investing heavily in improved technologies to capitalize on the potential value of processing and analyzing large data sets commonly referred to as “big data.” In general, processing big data may be accomplished in one of two ways. The first way seeks to supplement present relational database and data movement technologies with Web-proven technologies such as Apache™ Hadoop®. The second seeks to adopt the approach that various Web companies have taken with non-relational databases, along with implementing processing that moves function to data on commodity hardware and open source software. The lure of the first approach is that companies can depend on large software vendors and familiar technologies to evolve toward web-scale processing. The lure of the second approach is that it can be scaled and is more economical than current relational database or data movement techniques.
  • Hands-on experimentation with these “big data” technologies indicates that the second, Web company approach appears viable and shows the promise of economic benefit in hardware as well as in software development for both operational and analytical solutions. In terms of hardware, a grid of commodity hardware, not much different from desktop PCs, can be architected to address computational storage and network applications necessary to achieve data processing at web-scale. In terms of software development, non-relational databases offer less complex data structure. Additionally, function-to-data processing avoids data movement complexity which can translate to reduced development time and cost.
  • Web companies have also proven that data center networking requirements can be achieved with commodity hardware. In particular, the use of software-defined networking (SDN) associated with the OpenFlow communications protocol can be used to isolate network traffic. Further, an application of an Intel® coprocessor can enable high performance computing. The Intel® coprocessor, for example the Xeon Phi™ coprocessor, holds the promise of reducing software development complexity for high performance computing relative to real-time processing solutions. SDN can be used in combination with the coprocessor in cases in which the SDN isolates network traffic resulting from high performance computing from other, non-high performance computing network traffic. These additional gains in network and coprocessor technologies can also translate into data center power savings.
  • Generally, function-to-data processing can be employed in non-relational databases and in-database processing can be employed in relational databases. Various hands-on research indicates that business intelligence vendors are introducing function-to-data processing on technologies such as Apache™ HBase™ and Apache™ Hadoop®. These advancements can efficiently and effectively bring big data capabilities within reach of business partners.
  • Hands-on experimentation also demonstrates that the combination of search technologies and search-based applications on multi-structured data in non-relational databases provides a similar user experience to that of, for example, a Google® search on the Web. Multi-structured data can be a combination of unstructured, semi-structured, and structured data. These function-to-data and search advancements can enable business users to easily and economically access big data.
  • The embodiments and portions of exemplary embodiments as discussed herein are collectively referred to as the Web-Scale Data Fabric (WSDF). Although the embodiments as discussed herein are related to processing insurance data, it should be appreciated that the WSDF can be employed across other industries and their verticals such as, for example, finance, technology, healthcare, consulting, professional services, and/or the like.
  • It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based on any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this disclosure is referred to in this disclosure in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based on the application of 35 U.S.C. §112, sixth paragraph.
  • Accordingly, the term “insurance policy,” as used herein, generally refers to a contract between an insurer and an insured. In exchange for payments from the insured, the insurer pays for damages to the insured which are caused by covered perils, acts or events as specified by the language of the insurance policy. The payments from the insured are generally referred to as “premiums,” and typically are paid on behalf of the insured over time at periodic intervals. The amount of the damages payment is generally referred to as a “coverage amount” or a “face amount” of the insurance policy. An insurance policy may remain (or have a status or state of) “in-force” while premium payments are made during the term or length of coverage of the policy as indicated in the policy. An insurance policy may “lapse” (or have a status or state of “lapsed”), for example, when premium payments are not being paid, when a cash value of a policy falls below an amount specified in the policy (e.g., for variable life or universal life insurance policies), or if the insured or the insurer cancels the policy.
  • The terms “insurer,” “insuring party,” and “insurance provider” are used interchangeably herein to generally refer to a party or entity (e.g., a business or other organizational entity) that provides insurance products, e.g., by offering and issuing insurance policies. Typically, but not necessarily, an insurance provider may be an insurance company.
  • Typically, a person or customer (or an agent of the person or customer) of an insurance provider fills out an application for an insurance policy. The application may undergo underwriting to assess the eligibility of the party and/or desired insured article or entity to be covered by the insurance policy, and, in some cases, to determine any specific terms or conditions that are to be associated with the insurance policy, e.g., amount of the premium, riders or exclusions, waivers, and the like. Upon approval by underwriting, acceptance of the applicant to the terms or conditions, and payment of the initial premium, the insurance policy may be in-force, e.g., the policyholder is enrolled.
  • It should be appreciated that the configurations of the hardware components as illustrated in FIGS. 1-8 are merely exemplary and can include different combinations and aggregations of components. For example, the data nodes, application cache nodes, access nodes, and index nodes and any software associated therewith as depicted in some or all of FIGS. 1-8 may be combined into one or more hardware components to perform the functionalities as described herein. It should be appreciated that other combinations of components are envisioned.
  • Section 1: Web-Scale Grid
  • Referring now to FIG. 1, a configuration design 100 depicts the core of the WSDF implementing an elastic platform or “Grid.” This Grid is composed of a plurality of nodes (1-n) (105) that can be networked through software-defined connectivity. According to embodiments, “elastic” refers to the ability to add or remove nodes 105 within the Grid to accommodate capacity requirements and/or failure replacement. Although only two nodes 105 are depicted in FIG. 1, it should be appreciated that other numbers of nodes 105 are envisioned.
  • In embodiments, each node 105 can be designed to be equipped with a mid-range multi-core central processing unit (CPU) 106, direct-attached storage (DAS) 107 consisting of a set of drives sometimes referred to as “just a bunch of disks” (JBOD), random access memory (RAM) 108, and one or more coprocessor cards 109. The precise configuration of each node 105 can depend on its purpose for addressing web-scale requirements. Networking between nodes 105 is enabled with a networking device (such as a network switch 110 as shown in FIG. 1) where connectivity can be defined with software. The precise configuration of network connectivity depends on the purpose for addressing web-scale requirements.
  • For each node 105 to operate on the Grid, a stack of software 111 may be advantageous. In embodiments, the software stack 111 is designed to provide the kernel or operating system for the Grid. According to embodiments, the software stack 111 can be configured to include a Linux 2.6+ 64-bit framework, the Hadoop® 2.0+ framework, and/or other frameworks. The precise stack configuration for each node 105 depends on the purpose for addressing web-scale requirements. It should be appreciated that the software stack 111 can include other frameworks or combinations of frameworks.
  • The combination of the mid-range multi-core CPU 106, the coprocessor card 109, the RAM 108, and a software-defined network (SDN) 112 can provide the computational capabilities for the Grid. It should be appreciated that additional coprocessor cards 109 and/or nodes 105 can enable additional computing scale. In some configurations, this computational design can be a hybrid of high-performance computing (HPC) and many-task computing (MTC) grids. In some embodiments, the Apache™ Hadoop® YARN sub-project can enable the coexistence of HPC and MTC computation types within the same Grid.
  • This hybrid design can be further enhanced through the use of the SDN 112 as well as a mid-range multi-core CPU. According to embodiments, the SDN 112 can be used to isolate the network connectivity requirements for computation types from other competing network traffic. It is expected that this configuration may facilitate lower cost computing and network connectivity, along with lower power demands per flop.
  • The DAS 107 on each of the nodes 105 can be made available through the Apache™ Hadoop® Distributed File System (HDFS), combined with the SDN 112, to provide the storage capabilities for the Grid. Additional drives and/or nodes with drives can enable additional storage scale. The SDN 112 can be used to isolate the network connectivity requirements for storage from other competing network traffic. It is expected that this configuration or configurations similar thereto can facilitate lower cost network connectivity associated with storage per gigabyte.
  • The network devices used within the Grid are designed for operation using the OpenFlow protocol. OpenFlow combined with the SDN 112 can be referred to herein as a Network Operating System (NOS) 115. It is expected that this configuration of the NOS 115 can facilitate lower cost network devices and lower power demands.
  • In general, it should be appreciated that the web-scale Grid uses the SDN 112 to manage connectivity and uses the coprocessor 109 accelerator for distributed parallel computation. In particular, the CPU 106 can be used in combination with the coprocessor 109 for horizontal and vertical scaling to provide distributed parallel computation. Similarly, the web-scale Grid can facilitate storage using both DAS and RAM, whereby the combination of the coprocessor 109 and the storage enables the Grid to achieve web-scale.
  • Section 2: Web-Scale Federated Database
  • Referring now to FIG. 2, a federated database design 200 is deployed on the web-scale grid as discussed with respect to FIG. 1. According to embodiments, the federated database can be designed and configured for storage of transactions with low latency. From a hardware perspective, there are several nodes configured to store various data and several nodes configured to implement an in-memory cache. The number of nodes can be directly related to the scalability requirements for storage or low latency data ingestion.
  • One or more in-memory caches 225 can be designed and configured for distribution across one or more various data centers 219, thus enabling a distributed cache. By spanning data centers across a wide area network (WAN), the Grid can be positioned for high availability despite a disaster or disruption within any given data center 219. In particular, object transaction data that originates from either machine sources 220 (such as a home or automobile) or applications 221 is stored within the in-memory cache 225 before being asynchronously relayed and replicated using data transfer objects (DTO) to a log-structured merge-tree (LSM-tree) database 226 within each data center 219. Apache™ HBase™ is an example of a LSM-tree database. According to embodiments, the in-memory cache 225 plus the LSM-tree database 226 per data center 219 can comprise the federated database. In some embodiments, the LSM-tree databases 226 can be optimized for throughput to support low latency data ingestion.
  • DTOs can be enhanced with a timestamp as they are relayed to the LSM-tree databases 226 in each data center 219. The timestamp combined with a globally unique identifier (GUID) for the corresponding DTO can provide the basis for a multi-version concurrency control dataset (MCC). Transactions are stored with the MCC where each change to the DTO is appended. The resulting transaction history facilitates a point-in-time rollback of any given object transaction. In some embodiments, the internal MCC data design is independent of the type of database, thus enabling portability across other LSM-tree databases.
  • Storage of data in the LSM-tree databases 226 can be designed and configured such that object transaction data can be range-partitioned for distribution across the apportioned Grid nodes. This range partitioning can be based on the GUID and timestamp key concatenation. Each object transaction can also be designed for optimized storage, with or without encoding. For implementations utilizing HBase™, the column family and column descriptor can be encoded. In some cases, codes, descriptions, and other metadata such as data type and length can be stored separately in a cross-reference table. The object transaction or DTO can then be (de)serialized and mapped into LSM-tree database data types. For implementations using HBase™, the DTO can be (de)serialized into a tuple where each column can be represented in byte arrays.
  • As transactional data is accessed, the in-memory cache 225 can be designed and configured to evict the least recently used (LRU) data. When a transaction is requested by an application using a given GUID and that transaction is no longer in cache, the in-memory cache 225 can be designed to perform an on-demand read-through from the LSM-tree database 226, with an affinity toward the database within the same data center 219 (if available).
  • As object transactions are atomically persisted within the in-memory cache 225, they can be replicated across the data centers 219. The federated database design pattern can take advantage of eventual consistency to provide availability that spans multiple data centers without being dependent on database log-based replication.
  • In addition to the storage of transactions described above, the federated database design 200 can also provide storage for multi-structured data ingested through streaming. See the Web-Scale Stream Processor (Section 3) for additional details regarding this implementation.
  • According to embodiments, the web-scale federated database design 200 utilizes an in-memory key value object cache in concert with the LSM-tree databases 226 for low latency transaction ingestion with consistency in cache to eventual consistency among the LSM-tree databases 226 across the data centers 219. In addition, the web-scale federated database design 200 utilizes MCC on multi-structured data for “discovery-friendly” analytics with positioning for automated storage optimization.
  • Section 3: Web-Scale Stream Processor
  • Referring now to FIG. 3, an extension to both the Web-Scale Grid (Section 1) and the Web-Scale Federated Database (Section 2) is a stream processor implementation 300. According to embodiments, the stream processor implementation 300 is designed and configured to ingest and process multi-structured data in-stream with low latency through messaging. To enable the stream processor implementation 300 from a hardware perspective, several nodes can be leveraged with memory (for processing) and combined with storage (for high availability). In particular, nodes can be grouped into clusters with a design configuration that federates clusters across data centers 319 to manage capacity while addressing availability in case of disaster at any of the data centers 319.
  • For the stream processor implementation 300 to facilitate processing of data, the design utilizes the advanced message queuing protocol (AMQP) open standard. According to embodiments, AMQP enables interoperability as well as support for the ingestion of multi-structured data.
  • According to embodiments, messages are ingested through AMQP brokers hosted on federated clusters of the web-scale grid nodes. In particular, two types of clusters are used: a front office cluster 325 and a back office cluster 326. The front office cluster 325 can address low latency ingestion and processing facilitated primarily with RAM. The back office cluster 326 can address processing with less demanding latency facilitated primarily with DAS. One of each cluster type is enabled within the corresponding data center 319. Messages ingested with the front office cluster 325 are published to all back office clusters 326 within each data center 319 to enable high availability in case of disaster.
  • In some embodiments, messages can be processed by consumers that subscribe to queues. For example, for complex event processing (CEP), consumers are designed to work with an in-memory distributed cache. Referring to FIG. 3, the CEP functionalities may be implemented by the CEP cluster 327. This in-memory distributed cache used within the stream processor implementation 300 is shared with the web-scale federated database. When working with in-memory cache, data can be accessed using continuous query processing to determine occurrences of predefined events.
  • CEP is also designed to work with semantic processing software for classifying unstructured data in messages. That classification is subsequently published to another queue for further processing. An example of semantic classification software is Apache™ Stanbol™.
  • The stream processing implementation 300 is further configured to store messages on the web-scale federated database. In some embodiments, the message storing functionality can be also addressed with consumers on queues associated with the back office cluster 326. These consumers are designed to operate in batch through a scheduler compatible with the web-scale federated database. An example scheduler could be Hadoop® YARN.
  • The stream processor implementation 300 is further designed to amass ingested messages for independent subsequent processing while providing interoperability and extensibility through open messaging for multi-structured data. The stream processor implementation 300 can use in-memory cache in concert with AMQP messaging for low latency CEP.
  • Section 4: Web-Scale Data-Local Processor
  • Referring now to FIG. 4, an implementation 400 includes the web-scale grid as discussed with respect to Section 1 deployed on the web-scale federated database as discussed with respect to Section 2. The data-local processor is designed to enable concurrent, distributed, and parallel computation of the data residing on web-scale grid nodes (as discussed with respect to Section 1), through use of common statistical and semantic classification software.
  • Referring to FIG. 4, one or more data-local processor nodes 405 are designed to enable statistical software operations using either a high performance computing (HPC) with message passing interface (MPI) and/or many-task computing (MTC) with a Map Reduce (MR) programming or computational model. Each of the nodes 405 is equipped with a combination of a mid-range multi-core CPU 406, one or more coprocessor cards 409, and RAM 408 for computation on data local to the corresponding node 405. Each of the nodes 405 is also equipped to facilitate the execution of semantic classification software. An example of statistical software is “R” and an example of semantic classification software is Apache™ Stanbol™.
  • The data-local processor nodes 405 are further designed to enable software-defined network (SDN) connectivity in support of computational capabilities. Network connectivity management and operation with the SDN can provide a more effective means for enabling both programming and/or computational models to operate on the same set of nodes within the web-scale grid. In some embodiments, computation can be orchestrated with corresponding client software on a client workstation. Further, statistical programs and ontologies can be deployed from this client workstation.
  • According to embodiments, the web-scale data-local processor implementation 400 can utilize a combination of high-performance computing (HPC) and many-task computing (MTC) facilitated by SDN, the one or more coprocessor cards 409, and/or data-locality-based computation with direct-attached storage (DAS). As discussed herein, the CPU 406 can be used in combination with the one or more coprocessor cards 409 for horizontal and vertical scaling to provide distributed parallel computation. Similarly, the web-scale Grid can facilitate storage using both DAS and RAM, whereby the combination of the one or more coprocessors 409 and the storage enables the Grid to achieve web-scale. Further, the use of RAM as a cache of DAS can enable data-local computation.
  • Section 5: Web-Scale Information Retrieval
  • Referring now to FIG. 5, a web-scale information retrieval implementation 500 positions the web-scale federated database (as discussed with respect to Section 2) for content management of multi-structured data along with the web-scale data-local processor (as discussed with respect to Section 4) to facilitate content classification and indexing. In some embodiments, the information retrieval implementation 500 can address index processing using the Apache™ Lucene™ software operating with a data-local processor. In some cases, content processed by the information retrieval implementation 500 can be ingested through the web-scale stream processor (as discussed with respect to Section 3). As shown in FIG. 5, portions of the implementation 500 may be facilitated by one or more information retrieval applications 530.
  • The information retrieval implementation 500 can be designed to index content incrementally as it is stored on the federated database. Access to indexes for search queries can be enabled through additional nodes that extend the web-scale grid with an additional cluster. Generated index files can be copied to this search cluster and managed periodically. For low latency indexing applications, content can be indexed on insert into the federated database, while the index cluster is updated.
  • According to embodiments, the generated index can reference content in the federated database. Search query results can include content descriptions along with a key for retrieval of content from the federated database. This content key can be the basis for retrieval of data from the federated database.
  • The index cluster can process queries using, for example, the SolrCloud™ software. Each node 505 can contain index replicas and can be designed and configured to operate with high availability. In some embodiments, the number of nodes in the index cluster can be relative to the extent of search queries and volume of users.
  • According to embodiments, each data center can include the described layout of index and search functionalities. The combined deployment across data centers for information retrieval can provide availability resilience in disaster situations affecting an entire data center. The design and configuration of the information retrieval implementation 500 can provide low latency indexing and search across all multi-structured data and content. Further, the design and configuration of the information retrieval implementation can provide the basis for search-based applications (SBA) to address development of both operational and analytic applications.
  • Section 6: Web-Scale Master Data Management
  • Referring now to FIG. 6, a data management implementation 600 includes the web-scale grid (as discussed with respect to Section 1), the web-scale federated database (as discussed with respect to Section 2), the web-scale stream processor (as discussed with respect to Section 3), the web-scale data-local processor (as discussed with respect to Section 4), and the web-scale information retrieval (as discussed with respect to Section 5). According to embodiments, the data management implementation 600 can facilitate the collection and processing of data at extreme scale despite variety, velocity, and/or volume at a high-availability and/or disaster-recovery service level. To build business capability with the data management implementation 600, the data can be arranged and architected for management by the corresponding business, an architecture practice generally referred to as master data management. Accordingly, in some cases, the Web-Scale master data management implementation 600 can be the data architecture atop the web-scale platform.
  • In embodiments, data can be arranged according to its source within the federated database and, through the use of multi-version concurrency control (MCC) data design, can contain a log or history of known changes. Because information retrieval indexing can be designed to span numerous types of data, including history, and regardless of source, data can be easily accessed via a search.
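  • The MCC-style history log described above can be illustrated with a minimal sketch (not part of the original disclosure; the class name, key format, and policy fields are illustrative): writes append new versions rather than overwriting, reads return the latest version, and the full log of known changes remains accessible.

```python
import time
from collections import defaultdict

class VersionedStore:
    """MCC-style store sketch: writes never overwrite; each key keeps
    an append-only history of (version, timestamp, value) entries."""

    def __init__(self):
        self._data = defaultdict(list)

    def put(self, key, value):
        version = len(self._data[key]) + 1
        self._data[key].append((version, time.time(), value))
        return version

    def get(self, key):
        # A read returns the latest known version of the value
        entries = self._data.get(key)
        return entries[-1][2] if entries else None

    def history(self, key):
        # The log of known changes, oldest first
        return [(version, value) for version, _, value in self._data.get(key, [])]

store = VersionedStore()
store.put("policy:123", {"deductible": 500})
store.put("policy:123", {"deductible": 250})
```

A read of `policy:123` returns only the latest value, while `history` exposes every prior version for audit or search indexing.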
  • In order to enable transactions executed in the conduct of business, the acquisition of contextual reference data may be advantageous. For example, an insurance claim may reference the primary named insured, claimant, vehicle, peril, and/or the policy. In some embodiments, search can be the method for acquiring the required reference data for transactions.
  • According to embodiments, assessing the quality of master data is integral to the management of the data. In particular, faceted search can be the vehicle for identifying duplicate data occurrences as well as examining spelling variances that may affect data quality. The master data management implementation 600 can provide the architecture needed to map all ingested data with corresponding search indexes. In particular, the master data management implementation 600 can utilize search-based master data retrieval across various multi-structured data. Additionally, the master data management implementation 600 can utilize classification enabled with facets to provide metrics for data quality assessments.
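  • As a rough illustration of facet-based duplicate detection (a hypothetical sketch, not the disclosed implementation; the normalization rule and field names are assumptions), master-data records can be bucketed by a normalized facet value so that buckets holding more than one record surface potential duplicates or spelling variances for quality review:

```python
from collections import defaultdict

def facet_duplicates(records, facet_field):
    """Bucket records by a normalized facet value; buckets with more
    than one record flag potential duplicates or spelling variances."""
    buckets = defaultdict(list)
    for record in records:
        # Normalize case and whitespace before faceting
        key = " ".join(record[facet_field].lower().split())
        buckets[key].append(record)
    return {k: v for k, v in buckets.items() if len(v) > 1}

customers = [
    {"id": 1, "name": "John Smith"},
    {"id": 2, "name": "john  smith "},   # same customer, spelling/spacing variance
    {"id": 3, "name": "Jane Doe"},
]
suspects = facet_duplicates(customers, "name")
```

Counting the records per bucket yields exactly the kind of metric the text describes for data quality assessments.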
  • Section 7: Web-Scale Analytics
  • Referring now to FIG. 7, a web-scale analytics implementation 700 builds on the web-scale grid (as discussed with respect to Section 1), the web-scale federated database (as discussed with respect to Section 2), the web-scale stream processor (as discussed with respect to Section 3), the web-scale data-local processor (as discussed with respect to Section 4), the web-scale information retrieval (as discussed with respect to Section 5), and the web-scale master data management (as discussed with respect to Section 6). The web-scale analytics implementation 700 can leverage the stream processor and/or the data-local processor to compute aggregates, depending on latency requirements. In particular, aggregates that are routinely used can be periodically pre-computed and stored in the federated database for shared access. These pre-computed aggregates can also be indexed and accessed through information retrieval and/or correlated with master data using master data management. On-the-fly aggregates can depend on grid memory and coprocessors for computation, as well as speed-through concurrency, data locality, and/or computation as data is in-flight.
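  • The distinction between routinely pre-computed aggregates and on-the-fly aggregates might be sketched as follows (illustrative only; the class and aggregate names are not from the disclosure): frequently used aggregates are computed ahead of time and served from shared storage, while ad hoc aggregates fall back to on-demand computation.

```python
class AggregateService:
    """Sketch: routinely used aggregates are pre-computed and shared;
    anything else is computed on the fly from the records."""

    def __init__(self, records):
        self.records = records
        self._precomputed = {}

    def precompute(self, name, fn):
        # Periodically refreshed (e.g., on a schedule) and stored for shared access
        self._precomputed[name] = fn(self.records)

    def aggregate(self, name, fn=None):
        # Serve the stored result when available; fall back to on-the-fly computation
        if name in self._precomputed:
            return self._precomputed[name]
        return fn(self.records)

claims = [{"amount": 1200}, {"amount": 800}, {"amount": 400}]
service = AggregateService(claims)
service.precompute("total_paid", lambda rs: sum(r["amount"] for r in rs))
```

The latency trade-off is visible in the two paths: a pre-computed lookup is a dictionary read, while the fallback scans the records.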
  • The web-scale analytics implementation 700 can be designed for consumption through interactive visualizations. These visualizations can be generated using business intelligence (BI) tools. In some embodiments, BI tools can be hosted on a number of nodes that extend the grid. These BI tools can also be designed and configured to provide self-service (i.e., user-defined) function-to-data aggregate processing using the data-local processor.
  • According to embodiments, pre-computed aggregates can also be designed for transfer and storage to a columnar store. In some cases, columnar storage can provide economy-of-scale and can be well-suited for speed-of-thought analytics. This columnar store can be positioned for the interim to provide continuity for BI tools that operate with SQL. It should be appreciated that equivalent speed-of-thought analytics for use within the federated database are envisioned. A nested columnar data representation within the federated database can be positioned as the replacement for a columnar store.
  • According to embodiments, the web-scale analytics implementation 700 can utilize stream processing and data-local processing to compute data aggregations, and can choose the optimal processing method based on latency requirements. In particular, the web-scale analytics implementation 700 can enable self-service (i.e., user-defined) data-local processing for analytics. Further, the web-scale analytics implementation 700 can store pre-computed aggregates in a columnar store for continuity with current business intelligence (BI) tools, as well as provide speed-of-thought interactive visualizations at an economy-of-scale.
  • Section 8: Web-Scale Search-Based Application
  • Referring now to FIG. 8, illustrated is a web-scale search-based implementation 800 that can include the web-scale grid (as discussed with respect to Section 1), the web-scale federated database (as discussed with respect to Section 2), the web-scale stream processor (as discussed with respect to Section 3), the web-scale data-local processor (as discussed with respect to Section 4), the web-scale information retrieval (as discussed with respect to Section 5), the web-scale master data management (as discussed with respect to Section 6), and the web-scale analytics (as discussed with respect to Section 7). According to embodiments, the web-scale search-based implementation 800 can be used to build both operational and analytic applications. The type of application that best utilizes the web-scale search-based implementation 800 can be referred to as a web-scale search-based application 840.
  • Search functionality can add another dimension to the design of these web-scale search-based applications, particularly as a building block for master data management as well as a basis for navigating analytics. In some embodiments, the design for search-based applications can leverage information retrieval functionalities.
  • Applications that are operational for processing transactions, as well as applications used for analytics, can both be addressed through a search-based application design. This combination is distinct from other search-based application designs that are primarily analytical. The search-based implementation 800 is also unique in that it includes the data-local processor and stream processor for generating analytics, whereas existing designs rely on analytics provided by a search engine and/or an analytic tool that moves data-to-function.
  • The search-based application 840 can be developed using information retrieval and analytics graphical user interface (GUI) components. These GUI components are enabled with software development kits. The assembled GUI can be a mash-up of visualizations from analytics and faceted navigation from information retrieval.
  • The same features noted for master data management are applicable with the search-based application 840. In particular, lookup functionalities of reference data to associate with a transaction may be expected for operational applications. Further, visualization of data quality metrics for master data may be expected to include integration with analytics.
  • According to embodiments, the search-based application 840 may integrate analytic computations such as scoring an insurance claim for potential special investigation, displaying a targeted advertisement, and/or other functionalities. Development of these analytic computations applied with the data-local processor and stream processor can take advantage of distributed parallel or concurrent computing with data locality or function-to-data processing. This development approach may leverage high performance computing (HPC) with a message passing interface (MPI) and/or many-task computing (MTC) with the MapReduce (MR) programming/computational model.
  • When deployed, the GUI components of the search-based application 840 can leverage an extension to the Grid. The extension includes a set of nodes that host the application on containers within web application servers. These web application servers can be designed and configured to take advantage of in-memory cache for managing web sessions and to provide high availability across the data centers.
  • The search-based application 840 can include various applications to use the data storage, ingestion, and analysis systems and methods discussed herein to enable a user to perform and/or automate various tasks. For example, it may be advantageous to use a web-scale search-based application to assist with filling out and/or verifying insurance claims.
  • According to embodiments, the search-based application 840 can be configured to fill out an insurance claim and may also leverage the techniques discussed herein to streamline the process of filling out an insurance claim. For example, if a hail storm occurs in Bloomington, Ill. on May 3, various news stories, posts on social networks, blog posts, etc. will likely be written about the storm. These stories and posts may be directly on point (e.g., "A hailstorm occurred in Bloomington today") or may indirectly refer to the storm (e.g., "My car windshield is broken #bummer"). Using the techniques discussed above, these stories, posts, and data may be identified and analyzed using complex event processing (CEP) to determine whether a storm occurred over a particular area and/or whether the storm was severe enough to cause damage. For example, analytics may determine whether the "Bloomington" of the first post refers to Bloomington, Ill. or Bloomington, Ind. by determining whether words and metadata (e.g., IP address) associated with the post are more proximate to Illinois or Indiana. Additionally, if multiple posts and stories discuss damage to property in a timeframe on or shortly after May 3, analytics may be used to estimate the likelihood and extent of damage. Further, the originally unstructured and semi-structured data from these posts and stories that have been ingested with the web-scale stream processor (as discussed with respect to Section 3) may be analyzed with structured data (e.g., telematics data, information from insurance claims, etc.).
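  • The Bloomington disambiguation step might be sketched as a simple voting rule (a hypothetical illustration, not the disclosed analytics; the hint sets, scores, and function name are assumptions, and a production system would use richer gazetteers and models):

```python
# Illustrative hint sets; a real system would use geographic gazetteers
ILLINOIS_HINTS = {"illinois", "chicago", "peoria", "mclean"}
INDIANA_HINTS = {"indiana", "indianapolis", "monroe"}

def disambiguate_bloomington(words, ip_state=None):
    """Vote on Ill. vs. Ind. using the post's words plus IP-derived
    state metadata; return None when the signal is ambiguous."""
    tokens = {w.lower().strip(".,#!?") for w in words}
    il_score = len(tokens & ILLINOIS_HINTS)
    in_score = len(tokens & INDIANA_HINTS)
    if ip_state == "IL":
        il_score += 2          # metadata weighs more than a single word
    elif ip_state == "IN":
        in_score += 2
    if il_score == in_score:
        return None            # not enough signal to decide
    return "Bloomington, Ill." if il_score > in_score else "Bloomington, Ind."
```

A post mentioning "Illinois" resolves to Bloomington, Ill. even without metadata, while an otherwise uninformative post can still be resolved from its IP-derived state.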
  • Accordingly, when example customer John Smith begins to fill out an insurance claim, a web-scale search-based application 840 that is configured to fill out an insurance claim may compare information from these analytics to information associated with John Smith (e.g., his Bloomington, Ill. home address, the telematics data from his truck indicating that multiple sharp forces occurred at the front of the vehicle, and/or other data) to determine that the insurance claim likely relates to hail damage and to automatically populate the fields in an insurance form associated with the claim and relating to cause and extent of damage. Similarly, a web-scale search-based application that is configured to verify claims can determine whether a cause and/or an extent of damage (or other aspects of an insurance claim) are within a likely range based on analysis of structured, semi-structured, and unstructured data using the WSDF.
  • It should be appreciated that web-scale search-based applications can address development of both operational and analytic applications. In particular, web-scale search-based applications can utilize search-based master data retrieval for transactional reference data. Further, web-scale search-based applications can utilize faceted navigation of multi-structured data with information retrieval. Additionally, the web-scale search-based applications can combine stream processing and data-local processing for aggregation, depending on latency requirements.
  • Section 9: Web-Scale Data Fabric Use Case
  • Referring now to FIG. 9, an example use case 900 described in this section will serve to provide a more detailed example of how the unique capabilities of the WSDF architecture may be used to enable the company or business to be more competitive, such as by streamlining insurance data initiation and processing. According to embodiments, the use case 900 described herein can be a subset of a larger use case originally designed for both business consumption (e.g., insurance operations) and to manage the infrastructure (e.g., IT systems operational) of the WSDF. In some embodiments, the use case 900 can be designed using a concept known as visual interactive intelligent infrastructure (VI3). The remainder of this disclosure will refer to the use case 900 as VI3-B, with the “B” suffix being used to emphasize the business consumption aspect of the use case 900. It should be appreciated that the use case 900 may be designed using other techniques or concepts.
  • The business competitive advantage of VI3-B is the ability to prepopulate information in forms for a potential insurance claim based upon either a machine- or customer-generated event notification, as well as to perform post-processing analytics. In embodiments, having potential insurance information prepopulated saves both the insurance customer and the insurance provider from the time burden of manually entering information to activate a claim. Another advantage of VI3-B is the ability to provide proactive notification to business-to-business (B2B) services of the potential impact to their businesses should the event trigger be related to a mega-claim type of event.
  • The example use case 900 scenario starts with a significant hail storm 950, triggering an event notification received via a streamed feed from the National Oceanic and Atmospheric Administration (NOAA) 951. The event notification is ingested as an AMQP message 952 and interpreted as an actionable event. The AMQP message 952 is sent as a DTO 954 to an in-memory data store for work-in-process (WIP) 953. Complex event processing (CEP) of the in-memory data store 953 can use a continuous query capability to identify the actionable event as a trigger to request that all (or some) current policy holder information within the geographical area of the hail storm be transferred from a historical data store 955 (e.g., LSM-tree and MCC database) to the in-memory WIP data store 953 as a cached data object 960. Once the data object has been cached, the WIP data store 953 can initiate pre-population for a potential claim submission and store the potential claim submission in cache. In embodiments, this transfer of data from the historical data store 955 to the in-memory WIP data store 953 may be efficiently managed through operational policies defined to manage the software defined network (SDN).
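  • The CEP trigger described above can be sketched as follows (a minimal illustration, not the disclosed system; the event schema, GUID keys, and ZIP-code matching are assumptions): an actionable hail event pulls affected policies from the historical store into the WIP cache and pre-populates a potential claim for each.

```python
def on_weather_event(event, historical_store, wip_cache):
    """On an actionable hail event, cache policies within the affected
    area and pre-populate a potential claim submission for each."""
    if event.get("type") != "hail":
        return []              # not an actionable event for this rule
    cached = []
    for policy in historical_store:
        if policy["zip"] in event["area"]:
            wip_cache[policy["guid"]] = {
                "policy": policy,
                "potential_claim": {"peril": "hail", "insured": policy["name"]},
            }
            cached.append(policy["guid"])
    return cached

policies = [
    {"guid": "g1", "zip": "61701", "name": "John Smith"},
    {"guid": "g2", "zip": "47401", "name": "Jane Doe"},
]
cache = {}
hit = on_weather_event({"type": "hail", "area": {"61701", "61704"}}, policies, cache)
```

Only the policy inside the storm's geographical area is transferred into the cache, mirroring the selective transfer from the historical data store 955 to the WIP data store 953.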
  • Referring to the example use case 900 of FIG. 9, damage from the hail storm to autos, homes, or other items 957 covered for the customers (i.e., policy holders) may also trigger a first notice of loss (FNOL) event 958 through, for example, automatic sensor-based detection or from a customer contact received about a loss from the hail storm. The customer contact may be an email, text message, photo, video, phone call, and/or the like. The FNOL is ingested by a stream ingestion component 959 as an AMQP message and interpreted as an actionable event. The AMQP message is sent as a DTO 954 to the in-memory WIP data store 953. The CEP of the in-memory WIP data store 953 can identify this actionable event as a trigger to attempt to match the FNOL information to one of the cached policies 960. In some embodiments, data from additional entities 962, such as various business-to-business supporting services, may also provide information related to various events that may necessitate insurance claim processing. The in-memory WIP data store 953 may process the data from additional entities 962 and match the data to one or more of the cached policies 960.
  • Assuming the FNOL is matched (for example, using a GUID) to a valid one of the cached policies 960, the pre-populated object transaction is updated to reflect the receipt of the FNOL, and a transaction is submitted to a claim system (as illustrated by 961).
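  • The GUID-based matching and submission step might look like this minimal sketch (illustrative only; the cache layout and claim-system interface are assumptions carried over from the previous sketch, not the disclosed implementation):

```python
def handle_fnol(fnol, wip_cache, claim_system):
    """Match an FNOL to a cached policy by GUID; on a match, mark the
    pre-populated claim as received and submit it to the claim system."""
    entry = wip_cache.get(fnol["guid"])
    if entry is None:
        return None            # no cached policy in the storm area matched
    claim = entry["potential_claim"]
    claim["fnol_received"] = True   # update the pre-populated transaction
    claim_system.append(claim)      # submit to the claim system
    return claim

cache = {"g1": {"policy": {"guid": "g1"}, "potential_claim": {"peril": "hail"}}}
submitted = []
claim = handle_fnol({"guid": "g1"}, cache, submitted)
```

An FNOL carrying a GUID with no cached counterpart falls through without generating a transaction, which is where manual review would take over.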
  • As information related to the hail storm is continuously stream processed by the message broker into the distributed cache of the in-memory WIP data store 953, the information is further enriched for information retrieval through low-latency indexing and semantic processing to allow the information to be searched and analyzed in near real-time and with proper context. In some embodiments, the near real-time indexing and searching capabilities in the WSDF can be enabled by using Lucene™/Solr™ and/or coprocessors.
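  • The inverted indexing that underlies this near real-time search can be illustrated with a toy stand-in for Lucene™/Solr™-style indexing (a minimal sketch, not the actual library API; tokenization here is plain whitespace splitting):

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index sketch: token -> set of document ids,
    updated as documents arrive from the stream."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """AND semantics across query terms."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

docs = {
    1: "hail storm reported in Bloomington",
    2: "car windshield broken",
}
index = build_index(docs)
```

Because the index is updated per document as it is ingested, a query can match a document moments after it arrives, which is the low-latency property the text emphasizes.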
  • Once the data is enriched, various end users from various groups such as agency 963, claims 964, and/or business process researchers 965 may use the search based application 966 to gain further insight into insurance policies and the processing and/or initiation thereof. For example, the agent 963 may want to query how the hail storm may be impacting his or her book of business. For further example, the claim handler 964 may want to run a query to assess the storm's impact on financial reserves or estimate (e.g., using historical and analytical data stores) the number of claim handlers needed to manage a response to a large or mega claim event. Further, for example, business process researchers 965 may want to assess how well claims were processed from the FNOL event to claim close.
  • Additionally, in the event of a mega claim, the loss data collected from the storm could be used to assist various B2B services in preparing to better serve policy holders recovering from losses.
  • In embodiments, the master data management (MDM) capabilities can be used to ensure data integrity and consistency of policy holder data cached as a result of the hail storm event, for example by updating the in-memory WIP data store 953 and writing back updated policy information 956 to the historical data store 955. Further, multi-version concurrency control (MCC) can be used to ensure the consistency of the historical data store 955, whereby this same level of integrity and consistency is replicated to a WSDF data center replica entity 967.
  • The technical capabilities of WSDF can provide the insurance provider with an opportunity to act upon information in near real-time as the data is ingested and indexed. In particular, being able to make business decisions as events unfold can provide a competitive advantage for serving both customers as well as optimizing business operations. Additionally, having a rich archive of information can provide the insurance provider with an opportunity to explore how events correlate with other business events. This ability to explore historical data in detail will provide for better business modeling, forecasting, and development of business rules that may be implemented to optimize business operations. The opportunity is not just limited to claim operations as in this use case, but all aspects of the business involved in customer sales, service, retention, and business auditing and compliance.
  • FIG. 10 is an example method 1000 for processing insurance data. At least a portion of the method 1000 may be performed by one or more computing devices, in an embodiment. For example, the method 1000 may be performed by the stream ingestion component 959 in combination with the in memory WIP data store 953 and/or the search based application 966 as described with respect to FIG. 9.
  • The computing device can receive (block 1005), from at least one source, a message related to an event. In some embodiments, the message can be data relating to a FNOL event. Further, the at least one source can be one or more of various sensors associated with an insurance policy, a customer or agent, a service (e.g., NOAA), and/or a supporting business-to-business service or entity. Additionally, the message can be received as an AMQP message. The computing device can generate (block 1010) a data object based on the message. In embodiments, the data object can be a DTO. The computing device can examine (block 1015), for example using complex event processing (CEP), the data object to determine that the event is an actionable event related to insurance data processing. It should be appreciated that the computing device can receive updated or additional messages or data from the at least one source and combine the original message with updated messages or data to determine that the event is an actionable event. For example, a message can be received from a weather service that notifies of a blizzard warning, and an additional FNOL event message can be received from a customer reporting damage from a blizzard. As a result, the computing device can determine that the blizzard constitutes an actionable event. In some embodiments, the computing device can enrich the data object using low-latency indexing and/or semantic processing.
  • The computing device can retrieve (block 1020), from a historical data store using the data object, insurance data associated with at least one customer insurance policy, whereby the historical data store can be configured to store a plurality of customer insurance policies. In particular, the at least one customer insurance policy can correspond to at least one policy that may be affected by the actionable event. As an example, the computing device can examine the data object to identify a geographical area associated with the actionable event and can retrieve the at least one customer insurance policy having a location within or otherwise associated with the geographical area.
  • The computing device can store (block 1025) the insurance data in a cache memory. According to embodiments, the cache memory enables an effective and efficient retrieval of the insurance data. The computing device can receive (block 1030), from a requesting entity, a request to access at least a portion of the insurance data. In some embodiments, the request can be received via a search based application and the requesting entity can be one or more of an insurance agent, business process researcher, or a claim handler. Further, the portion of the insurance data can correspond to one or more specific customer insurance policies and how the actionable event may potentially impact claims for the one or more specific customer insurance policies. The computing device can retrieve (block 1035) at least the portion of the insurance data from the cache memory and provide (block 1040) at least the portion of the insurance data to the requesting entity. According to some embodiments, the computing device can generate processed insurance data based on the data object (with or without enrichment according to the low-latency indexing and/or the semantic processing) and at least the portion of the insurance data. Accordingly, the requesting entity can search and analyze the processed insurance data in near real-time.
  • The computing device can determine (block 1045) that the actionable event is covered by the at least one customer insurance policy. In response, the computing device can generate (block 1050) a policy transaction for the at least one customer insurance policy wherein the policy transaction is based on the actionable event. In some embodiments, the policy transaction can be a pre-filled insurance form associated with a potential claim. The computing device can submit (block 1055) the policy transaction to a claim system, such as a claim system associated with an insurance provider. In some optional embodiments, the computing device can also send (block 1060) data indicative of the policy transaction to the historical data store for storage therein. As a result, the historical data store can store updated data associated with the appropriate customer insurance policy.
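  • The flow of method 1000 can be condensed into a compact sketch (illustrative only, not the claimed implementation; the event types, ZIP-based area matching, and record fields are assumptions), chaining the receive, examine, retrieve, cache, and submit steps:

```python
def process_insurance_event(message, historical_store, cache, claim_system):
    """Compact sketch of blocks 1005-1060: receive a message, build a
    data object, test whether the event is actionable, retrieve and
    cache affected policies, then generate and submit transactions."""
    dto = {"event": message["event"], "area": message["area"]}           # block 1010
    if dto["event"] not in {"hail", "blizzard"}:                         # block 1015
        return None                                                      # not actionable
    affected = [p for p in historical_store if p["zip"] in dto["area"]]  # block 1020
    for policy in affected:                                              # block 1025
        cache[policy["guid"]] = policy
    transactions = []
    for policy in affected:
        if dto["event"] in policy["covered_perils"]:                     # block 1045
            txn = {"policy": policy["guid"], "cause": dto["event"]}      # block 1050
            claim_system.append(txn)                                     # block 1055
            transactions.append(txn)
    return transactions

store = [
    {"guid": "g1", "zip": "61701", "covered_perils": {"hail", "fire"}},
    {"guid": "g2", "zip": "61701", "covered_perils": {"fire"}},
]
cache, submitted = {}, []
txns = process_insurance_event({"event": "hail", "area": {"61701"}}, store, cache, submitted)
```

Note how both policies in the affected area are cached (block 1025) but only the one whose coverage includes the peril yields a submitted policy transaction (blocks 1045-1055).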
  • FIG. 11 illustrates an example computing device 1115 (such as the stream ingestion component 959 and/or the in-memory WIP data store 953 as described with respect to FIG. 9) in which the functionalities as discussed herein may be implemented. The computing device 1115 can include a processor 1172 as well as a memory 1174. The memory 1174 can store an operating system 1176 capable of facilitating the functionalities as discussed herein as well as a set of applications 1178. For example, one of the set of applications 1178 can be the search based application 966 as described with respect to FIG. 9. The processor 1172 can interface with the memory 1174 to execute the operating system 1176 and the set of applications 1178. According to embodiments, the memory 1174 can also store data associated with insurance policies, any received telematics data or event data, and/or other data. The memory 1174 can include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), cache memory, hard drives, flash memory, MicroSD cards, and/or others.
  • The computing device 1115 can further include a communication module 1180 configured to communicate data via one or more networks 1110. According to some embodiments, the communication module 1180 can include one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and configured to receive and transmit data via one or more external ports 1182. For example, the communication module 1180 can receive telematics data from one or more vehicles via the network 1110 and can receive any supplemental or relevant data associated with insurance policies or events from a third party entity or component. For further example, the computing device 1115 can transmit processed insurance data via the communication module 1180 and the network(s) 1110. The computing device 1115 may further include a user interface 1184 configured to present information to a user and/or receive inputs from the user. As shown in FIG. 11, the user interface 1184 includes a display screen 1186 and I/O components 1188 (e.g., ports, capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, speakers, microphones, and others). According to embodiments, the user may access the computing device 1115 via the user interface 1184 to examine ingested data, examine processed insurance claims, and/or perform other functions.
  • In general, a computer program product in accordance with an embodiment includes a computer usable storage medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code is adapted to be executed by the processor 1172 (e.g., working in connection with the operating system 1176) to facilitate the functions as described herein. In this regard, the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via C, C++, Java, ActionScript, Objective-C, JavaScript, CSS, XML, and/or others).
  • Although the foregoing text sets forth a detailed description of numerous different embodiments, it should be understood that the scope of the patent is defined by the words of the claims set forth at the end of this patent. The detailed description is to be construed as exemplary only and does not describe every possible embodiment because describing every possible embodiment would be impractical, if not impossible. Numerous alternative embodiments could be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims.
  • Thus, many modifications and variations may be made in the techniques and structures described and illustrated herein without departing from the spirit and scope of the present claims. Accordingly, it should be understood that the methods and systems described herein are illustrative only and are not limiting upon the scope of the claims.
  • GLOSSARY
  • Advanced Messaging Queuing Protocol (AMQP) is an open standard protocol for messaging middleware.
  • Commodity Computing refers to components based on open standards and provided by several manufacturers with little differentiation.
  • Complex Event Processing (CEP) occurs when data from a combination of sources is assessed to determine an event.
  • Content Management System (CMS) is the store for all multi-structured data.
  • Continuous Query refers to a means of actively applying rules to data changes, often in support of Complex Event Processing (CEP).
  • Coprocessor supplements the function of the CPU in a general purpose context.
  • Direct Attached Storage (DAS) refers to a digital storage device (e.g., hard disk) that is directly connected (no network device) to a host.
  • Distributed Cache refers to both the means of caching data in transit to (write) and from (read) the database across a grid of servers, as well as the ability of such a scheme to address high-availability.
  • Distributed Operating System refers to software that manages the computing resources and provides common services where each node hosts a subset of the global aggregate operating system.
  • Globally Unique Identifier (GUID) is an identifier, typically 128 bits in length, used to uniquely identify Objects.
  • High-Availability (HA) Grid or Cluster refers to a group of computers that operate by providing reliable hosting of applications with graceful degradation and/or upgrade due to component failure or addition, respectively, but not at the expense of availability. Availability is defined as the means to submit additional processing or manage existing processing.
  • Hadoop® Distributed File System (HDFS) is a component of the Hadoop® framework that manages storage of files in a fault tolerant and distributed fashion using replicated blocks across a set of data nodes.
  • Hadoop® Yet Another Resource Negotiator (YARN) is a component of the Hadoop® framework that manages computing resources on the set of data nodes, which are also used for computation.
  • High Performance Computing (HPC) is characterized as needing large amounts of computing power over short periods of time, often expressed with tightly coupled low latency interconnects such as the Message Passing Interface (MPI).
  • Information Retrieval refers to inverted indexing and query of multi-structured data.
  • Linux is the operating system used to manage a node and its computational and file storage resources.
  • Log-Structured Merge Tree (LSM-tree) database is a high throughput optimized datastore.
  • Low Latency refers to a network computing delay that is generally accepted as imperceptible by humans.
  • Many-Task Computing (MTC) is geared toward addressing high-performance computations comprised of multiple distinct activities integrated via a file system.
  • Master Data Management (MDM) refers to the governance and polices used to manage reference data that is key to the operation of a business.
  • Message Broker is used for enabling enterprise integration patterns used to integrate systems.
  • Multi-Structured data refers to an all-inclusive set of structured, semi-structured, and unstructured data.
  • Multi-Version Concurrency Control (MCC) is a method used by databases to implement transaction history.
  • Object Transaction refers to a unit of work for any data change to an Object attribute recorded by the database.
  • Ontology is a set of semantic metadata on which unstructured data classification is based.
  • OpenFlow enables network connectivity using a communication protocol through a switch path determined by software.
  • Software Defined Network (SDN) refers to the data flow between compute nodes in a computer network that is determined by logic implemented in software operating on server(s) separate of the network hardware.
  • Stream Processing refers to the application of messaging for the purposes of addressing parallel processing of in-flight data used for Complex Event Processing (CEP).
  • Semantic Processing refers to the ability to bring meaningful search to enterprise search engines through natural language processing and associated content classification based on ontology.

Claims (25)

What is claimed:
1. A system for processing insurance data, the system comprising:
a stream ingestion hardware component adapted to receive data relating to an actionable event and configured to generate a data object based on the received data;
a historical data store adapted to store a plurality of customer insurance policies;
an in-memory data store adapted to communicate with the stream ingestion hardware component and with the historical data store, the in-memory data store configured to:
receive the data object from the stream ingestion hardware component, and
use the data object to retrieve, from the historical data store, insurance data associated with at least one of the plurality of customer insurance policies; and
a search application adapted to interface with the in-memory data store and configured to:
receive, from a requesting entity, a request to access at least a portion of the insurance data,
retrieve at least the portion of the insurance data from the in-memory data store, and
provide at least the portion of the insurance data to the requesting entity.
2. The system of claim 1, wherein the stream ingestion hardware component receives the data relating to an actionable event as an advanced message queuing protocol (AMQP) message and generates the data object as a data transfer object (DTO).
3. The system of claim 1, wherein the in-memory data store is further configured to:
employ complex event processing (CEP) to examine the data object to identify an actionable event, and wherein the in-memory data store retrieves the insurance data associated with the at least one of the plurality of customer insurance policies in response to identifying the actionable event.
4. The system of claim 1, wherein the in-memory data store includes a data cache, wherein the in-memory data store is configured to store the insurance data in the data cache.
5. The system of claim 4, wherein the stream ingestion hardware component receives the data relating to the actionable event as data related to a first notice of loss (FNOL) event.
6. The system of claim 5, further comprising a claim system adapted to interface with the in-memory data store, wherein the in-memory data store compares the retrieved insurance data to the data object to determine that the actionable event is covered by the at least one of the plurality of customer insurance policies, and wherein the in-memory data store is further configured to:
generate a policy transaction for the at least one of the plurality of customer insurance policies, the policy transaction based on the actionable event, and
submit the policy transaction to the claim system.
7. The system of claim 6, wherein the in-memory data store is further configured to send data indicative of the policy transaction to the historical data store, and wherein the historical data store is further configured to update the at least one of the plurality of customer insurance policies with the data indicative of the policy transaction.
8. The system of claim 1, wherein the in-memory data store is further configured to examine the data object to identify a geographical area associated with the actionable event, wherein the at least one of the plurality of customer insurance policies is retrieved based on the geographical area.
9. The system of claim 1, wherein the in-memory data store is further configured to enrich the data object using at least one of low-latency indexing and semantic processing, and wherein the search application is further configured to:
retrieve the enriched data object from the in-memory data store, and
provide the enriched data object to the requesting entity.
10. The system of claim 9, wherein the in-memory data store is further configured to generate processed insurance data based on the enriched data object and at least the portion of the insurance data.
11. A method of processing insurance data, the method comprising:
receiving, from a source, a message related to an event;
generating a data object based on the message;
examining the data object to determine that the event is an actionable event related to insurance data processing;
responsive to examining the data object, retrieving, from a historical data store using the data object, insurance data associated with at least one customer insurance policy;
storing the insurance data in a cache memory;
receiving, from a requesting entity, a request to access at least a portion of the insurance data;
responsive to receiving the request, retrieving at least the portion of the insurance data from the cache memory; and
providing at least the portion of the insurance data to the requesting entity.
12. The method of claim 11, wherein receiving the message comprises receiving an advanced message queuing protocol (AMQP) message, and wherein generating the data object comprises generating a data transfer object (DTO).
13. The method of claim 11, wherein examining the data object comprises employing complex event processing (CEP) to examine the data object.
14. The method of claim 11, wherein receiving the message related to the event comprises one of receiving a feed from a service or receiving an alert from a supporting business entity.
15. The method of claim 14, wherein receiving the message related to the event comprises receiving data relating to a first notice of loss (FNOL) event.
16. The method of claim 11, further comprising:
determining that the actionable event is covered by the at least one customer insurance policy;
generating a policy transaction for the at least one customer insurance policy, the policy transaction based on the actionable event; and
submitting the policy transaction to a claim system.
17. The method of claim 16, further comprising:
sending data indicative of the policy transaction to the historical data store, wherein the historical data store updates the at least one customer insurance policy with the data indicative of the policy transaction.
18. The method of claim 11, further comprising:
examining the data object to identify a geographical area associated with the actionable event, wherein the at least one customer insurance policy is retrieved based on the geographical area.
19. The method of claim 11, further comprising:
enriching the data object using at least one of low-latency indexing and semantic processing; and
providing the enriched data object to the requesting entity.
20. The method of claim 19, further comprising generating processed insurance data based on the enriched data object and at least the portion of the insurance data.
21. A system for processing data, the system comprising:
a stream ingestion hardware component adapted to receive data and generate a data object based on the received data;
a historical data store adapted to store historical data related to the received data;
an in-memory data store adapted to communicate with the stream ingestion hardware component and with the historical data store, the in-memory data store configured to:
receive the data object from the stream ingestion hardware component, and
use the data object to retrieve, from the historical data store, at least a portion of the historical data; and
a search application adapted to interface with the in-memory data store and configured to:
receive, from a requesting entity, a request to access at least the portion of the historical data,
retrieve at least the portion of the historical data from the in-memory data store, and
provide at least the portion of the historical data to the requesting entity.
22. The system of claim 21, wherein the historical data includes customer data.
23. The system of claim 21, wherein the stream ingestion hardware component receives the data as an advanced message queuing protocol (AMQP) message and generates the data object as a data transfer object (DTO).
24. The system of claim 21, wherein the in-memory data store is further configured to enrich the data object using at least one of low-latency indexing and semantic processing, and wherein the search application is further configured to:
retrieve the enriched data object from the in-memory data store, and
provide the enriched data object to the requesting entity.
25. The system of claim 21, wherein the in-memory data store includes a data cache, wherein the in-memory data store is configured to store at least the portion of the historical data in the data cache.
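The flow recited in method claim 11 can be sketched as a short pipeline: a message arrives, becomes a data object, is examined for an actionable event, matching policy data is retrieved from a historical store into a cache, and cached data is served back on request. All class, function, and field names below are hypothetical illustrations, not language from the claims.

```python
def is_actionable(data_object):
    # Stand-in for the claim's "examining" step (e.g., complex event
    # processing per claim 13); here, a simple predicate on event type.
    return data_object.get("event_type") == "FNOL"

def process_message(message, historical_store, cache):
    # Generate a data object from the received message (claim 11, step 2).
    data_object = {"event_type": message["type"], "policy_id": message["policy_id"]}
    if is_actionable(data_object):
        # Retrieve insurance data for the policy and store it in cache memory.
        insurance_data = historical_store.get(data_object["policy_id"])
        cache[data_object["policy_id"]] = insurance_data
    return data_object

def handle_request(policy_id, cache):
    # Serve the requesting entity from the cache memory (claim 11, last steps).
    return cache.get(policy_id)

# Usage: a first-notice-of-loss (FNOL) message triggers retrieval and caching.
historical_store = {"P-100": {"holder": "Jane Doe", "coverage": "auto"}}
cache = {}
process_message({"type": "FNOL", "policy_id": "P-100"}, historical_store, cache)
assert handle_request("P-100", cache) == {"holder": "Jane Doe", "coverage": "auto"}
```

In the claimed system, the ingestion, caching, and search roles are played by dedicated components (stream ingestion hardware, an in-memory data store, and a search application); the dictionaries here merely stand in for those stores.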

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361800561P 2013-03-15 2013-03-15
US14/201,046 US20140278575A1 (en) 2013-03-15 2014-03-07 Systems And Methods Of Processing Insurance Data Using A Web-Scale Data Fabric

Publications (1)

Publication Number Publication Date
US20140278575A1 true US20140278575A1 (en) 2014-09-18

Family

ID=51531949

Family Applications (8)

Application Number Title Priority Date Filing Date
US14/096,850 Abandoned US20140278573A1 (en) 2013-03-15 2013-12-04 Systems and methods for initiating insurance processing using ingested data
US14/201,325 Active US8930581B2 (en) 2013-03-15 2014-03-07 Implementation of a web-scale data fabric
US14/201,046 Abandoned US20140278575A1 (en) 2013-03-15 2014-03-07 Systems And Methods Of Processing Insurance Data Using A Web-Scale Data Fabric
US14/485,182 Active US9015238B1 (en) 2013-03-15 2014-09-12 Implementation of a web scale data fabric
US14/601,899 Active US9208240B1 (en) 2013-03-15 2015-01-21 Implementation of a web scale data fabric
US14/855,780 Active US9363322B1 (en) 2013-03-15 2015-09-16 Implementation of a web scale data fabric
US15/132,557 Active 2034-04-12 US9948715B1 (en) 2013-03-15 2016-04-19 Implementation of a web-scale data fabric
US15/954,181 Active 2034-08-18 US10715598B1 (en) 2013-03-15 2018-04-16 Implementation of a web-scale data fabric


Country Status (1)

Country Link
US (8) US20140278573A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794237A (en) * 2015-05-07 2015-07-22 中国人民大学 Web page information processing method and device
CN106789923A (en) * 2016-11-28 2017-05-31 新疆熙菱信息技术股份有限公司 The normalized system and method for DTO protocol datas
US10380518B2 (en) 2013-09-30 2019-08-13 Maximus Process tracking and defect detection
US10580079B1 (en) 2015-06-23 2020-03-03 Allstate Insurance Company Enterprise nervous system
WO2020226701A1 (en) * 2019-05-03 2020-11-12 Western Digital Technologies, Inc. Fault tolerant data coherence in large-scale distributed cache systems
CN112288585A (en) * 2020-11-20 2021-01-29 中国人寿保险股份有限公司 Insurance business actuarial data processing method and device and electronic equipment
WO2021029903A1 (en) * 2019-08-15 2021-02-18 Vouch, Inc. Rate ingestion tool
US11030697B2 (en) 2017-02-10 2021-06-08 Maximus, Inc. Secure document exchange portal system with efficient user access
CN115952200A (en) * 2023-01-17 2023-04-11 安芯网盾(北京)科技有限公司 Multi-source heterogeneous data aggregation query method and device based on MPP (maximum power point tracking) architecture
US11675706B2 (en) 2020-06-30 2023-06-13 Western Digital Technologies, Inc. Devices and methods for failure detection and recovery for a distributed cache
US11736417B2 (en) 2020-08-17 2023-08-22 Western Digital Technologies, Inc. Devices and methods for network message sequencing
US11765250B2 (en) 2020-06-26 2023-09-19 Western Digital Technologies, Inc. Devices and methods for managing network traffic for a distributed cache

Families Citing this family (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10706021B2 (en) * 2012-01-17 2020-07-07 Oracle International Corporation System and method for supporting persistence partition discovery in a distributed data grid
US10713726B1 (en) 2013-01-13 2020-07-14 United Services Automobile Association (Usaa) Determining insurance policy modifications using informatic sensor data
US20140278573A1 (en) * 2013-03-15 2014-09-18 State Farm Mutual Automobile Insurance Company Systems and methods for initiating insurance processing using ingested data
US20140307551A1 (en) * 2013-04-12 2014-10-16 Nokia Siemens Networks Oy Automatic learning of wi-fi neighbors and network characteristics
US9922101B1 (en) * 2013-06-28 2018-03-20 Emc Corporation Coordinated configuration, management, and access across multiple data stores
US9559896B2 (en) * 2013-07-08 2017-01-31 Cisco Technology, Inc. Network-assisted configuration and programming of gateways in a network environment
US9947051B1 (en) 2013-08-16 2018-04-17 United Services Automobile Association Identifying and recommending insurance policy products/services using informatic sensor data
US10719562B2 (en) * 2013-12-13 2020-07-21 BloomReach Inc. Distributed and fast data storage layer for large scale web data services
US11416941B1 (en) 2014-01-10 2022-08-16 United Services Automobile Association (Usaa) Electronic sensor management
US11087404B1 (en) 2014-01-10 2021-08-10 United Services Automobile Association (Usaa) Electronic sensor management
US10552911B1 (en) 2014-01-10 2020-02-04 United Services Automobile Association (Usaa) Determining status of building modifications using informatics sensor data
US11847666B1 (en) 2014-02-24 2023-12-19 United Services Automobile Association (Usaa) Determining status of building modifications using informatics sensor data
US10614525B1 (en) 2014-03-05 2020-04-07 United Services Automobile Association (Usaa) Utilizing credit and informatic data for insurance underwriting purposes
US20150356685A1 (en) * 2014-06-05 2015-12-10 Hartford Fire Insurance Company System and method for administering extreme weather insurance data
US10991049B1 (en) 2014-09-23 2021-04-27 United Services Automobile Association (Usaa) Systems and methods for acquiring insurance related informatics
US9928553B1 (en) 2014-10-09 2018-03-27 State Farm Mutual Automobile Insurance Company Method and system for generating real-time images of customer homes during a catastrophe
US9875509B1 (en) 2014-10-09 2018-01-23 State Farm Mutual Automobile Insurance Company Method and system for determining the condition of insured properties in a neighborhood
US9129355B1 (en) 2014-10-09 2015-09-08 State Farm Mutual Automobile Insurance Company Method and system for assessing damage to infrastructure
US10134092B1 (en) 2014-10-09 2018-11-20 State Farm Mutual Automobile Insurance Company Method and system for assessing damage to insured properties in a neighborhood
US11386107B1 (en) 2015-02-13 2022-07-12 Omnicom Media Group Holdings Inc. Variable data source dynamic and automatic ingestion and auditing platform apparatuses, methods and systems
US10430889B1 (en) 2015-02-23 2019-10-01 Allstate Insurance Company Determining an event
US10095547B1 (en) * 2015-03-13 2018-10-09 Twitter, Inc. Stream processing at scale
CN104836846B (en) * 2015-04-02 2019-08-16 国家电网公司 A kind of energy interconnection communications network architecture system based on SDN technology
US10980073B2 (en) * 2015-04-07 2021-04-13 Sharp Kabushiki Kaisha Terminal device, PGW, and TWAG
US10083551B1 (en) 2015-04-13 2018-09-25 Allstate Insurance Company Automatic crash detection
US9767625B1 (en) 2015-04-13 2017-09-19 Allstate Insurance Company Automatic crash detection
US10489863B1 (en) 2015-05-27 2019-11-26 United Services Automobile Association (Usaa) Roof inspection systems and methods
US10330826B2 (en) 2015-07-23 2019-06-25 Hartford Fire Insurance Company System for sensor enabled reporting and notification in a distributed network
US10389605B2 (en) * 2015-09-25 2019-08-20 Ncr Corporation Area based event detection and multi-factorial event characterization
US11436911B2 (en) 2015-09-30 2022-09-06 Johnson Controls Tyco IP Holdings LLP Sensor based system and method for premises safety and operational profiling based on drift analysis
US10902524B2 (en) 2015-09-30 2021-01-26 Sensormatic Electronics, LLC Sensor based system and method for augmenting underwriting of insurance policies
US11151654B2 (en) 2015-09-30 2021-10-19 Johnson Controls Tyco IP Holdings LLP System and method for determining risk profile, adjusting insurance premiums and automatically collecting premiums based on sensor data
CN105468720A (en) * 2015-11-20 2016-04-06 北京锐安科技有限公司 Method for integrating distributed data processing systems, corresponding systems and data processing method
US10244070B2 (en) * 2016-01-26 2019-03-26 Oracle International Corporation In-memory message sequencing
CN105809552A (en) * 2016-03-10 2016-07-27 深圳市前海安测信息技术有限公司 Insurance actuarial system and method based on search keywords
US10351133B1 (en) 2016-04-27 2019-07-16 State Farm Mutual Automobile Insurance Company Systems and methods for reconstruction of a vehicular crash
US10552914B2 (en) 2016-05-05 2020-02-04 Sensormatic Electronics, LLC Method and apparatus for evaluating risk based on sensor monitoring
US10810676B2 (en) * 2016-06-06 2020-10-20 Sensormatic Electronics, LLC Method and apparatus for increasing the density of data surrounding an event
JP6786892B2 (en) * 2016-06-09 2020-11-18 富士ゼロックス株式会社 Server equipment, information processing systems and programs
US10650621B1 (en) 2016-09-13 2020-05-12 Iocurrents, Inc. Interfacing with a vehicular controller area network
US11361380B2 (en) 2016-09-21 2022-06-14 Allstate Insurance Company Enhanced image capture and analysis of damaged tangible objects
US10902525B2 (en) 2016-09-21 2021-01-26 Allstate Insurance Company Enhanced image capture and analysis of damaged tangible objects
US10970786B1 (en) * 2016-11-17 2021-04-06 United Services Automobile Association (Usaa) Recommendation engine for cost of a claim
US10817334B1 (en) * 2017-03-14 2020-10-27 Twitter, Inc. Real-time analysis of data streaming objects for distributed stream processing
US10600322B2 (en) 2017-06-21 2020-03-24 International Business Machines Corporation Management of mobile objects
US10504368B2 (en) 2017-06-21 2019-12-10 International Business Machines Corporation Management of mobile objects
US11238164B2 (en) * 2017-07-10 2022-02-01 Burstiq, Inc. Secure adaptive data storage platform
US11681667B2 (en) 2017-07-30 2023-06-20 International Business Machines Corporation Persisting distributed data sets into eventually consistent storage systems
CN107819813B (en) * 2017-09-15 2020-07-28 西安科技大学 Big data comprehensive analysis and processing service system
US10686734B2 (en) * 2017-09-26 2020-06-16 Hewlett Packard Enterprise Development Lp Network switch with interconnected member nodes
CN108132982A (en) * 2017-12-13 2018-06-08 湖南中车时代通信信号有限公司 The analysis system and method for train operation monitoring device data based on big data
ES2717187A1 (en) * 2017-12-19 2019-06-19 Elortegui Josu Larrauri SYSTEM AND PRESCRIPTIVE MAINTENANCE METHOD BASED ON DATA ANALYSIS AND GENERATION OF INFORMATION THROUGH RFID TECHNOLOGY (Machine-translation by Google Translate, not legally binding)
US10789785B2 (en) * 2018-06-11 2020-09-29 Honeywell International Inc. Systems and methods for data collection from maintenance-prone vehicle components
CN108880900B (en) * 2018-07-02 2021-04-09 哈尔滨工业大学 Virtual network mapping method for network security test
US10761753B2 (en) * 2018-07-12 2020-09-01 International Business Machines Corporation Site-based coordination of distributed storage network memory
US11100918B2 (en) 2018-08-27 2021-08-24 American Family Mutual Insurance Company, S.I. Event sensing system
US11385936B1 (en) 2018-09-28 2022-07-12 Splunk Inc. Achieve search and ingest isolation via resource management in a search and indexing system
US10942774B1 (en) 2018-09-28 2021-03-09 Splunk Inc. Dynamic reassignment of search processes into workload pools in a search and indexing system
US11563640B2 (en) * 2018-12-13 2023-01-24 At&T Intellectual Property I, L.P. Network data extraction parser-model in SDN
US10805164B2 (en) * 2018-12-14 2020-10-13 At&T Intellectual Property I, L.P. Controlling parallel data processing for service function chains
US11341274B2 (en) 2018-12-19 2022-05-24 Elasticsearch B.V. Methods and systems for access controlled spaces for data analytics and visualization
CN109829015A (en) * 2019-01-16 2019-05-31 成都有据量化科技有限公司 Finance data storage method, device and storage medium based on HBase
US11477207B2 (en) * 2019-03-12 2022-10-18 Elasticsearch B.V. Configurable feature level controls for data
US11397516B2 (en) 2019-10-24 2022-07-26 Elasticsearch B.V. Systems and method for a customizable layered map for visualizing and analyzing geospatial data
US11620715B2 (en) 2020-02-18 2023-04-04 BlueOwl, LLC Systems and methods for generating insurance policies with predesignated policy levels and reimbursement controls
US11599847B2 (en) 2020-02-18 2023-03-07 BlueOwl, LLC Systems and methods for generating an inventory of personal possessions of a user for insurance purposes
US11468515B1 (en) 2020-02-18 2022-10-11 BlueOwl, LLC Systems and methods for generating and updating a value of personal possessions of a user for insurance purposes
US11861722B2 (en) 2020-02-18 2024-01-02 BlueOwl, LLC Systems and methods for generating and updating an inventory of personal possessions of a user for insurance purposes
US11710186B2 (en) 2020-04-24 2023-07-25 Allstate Insurance Company Determining geocoded region based rating systems for decisioning outputs
US11488253B1 (en) * 2020-05-26 2022-11-01 BlueOwl, LLC Systems and methods for determining personalized loss valuations for a loss event
US11651096B2 (en) 2020-08-24 2023-05-16 Burstiq, Inc. Systems and methods for accessing digital assets in a blockchain using global consent contracts
CN112685385B (en) * 2020-12-31 2021-11-16 广西中科曙光云计算有限公司 Big data platform for smart city construction
CN113867961B (en) * 2021-09-30 2022-07-22 中国矿业大学(北京) Heterogeneous GPU cluster deep learning hybrid load scheduling optimization method

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5950169A (en) * 1993-05-19 1999-09-07 Ccc Information Services, Inc. System and method for managing insurance claim processing
US20030078816A1 (en) * 2001-09-28 2003-04-24 Filep Thomas J. System and method for collaborative insurance claim processing
US20060229923A1 (en) * 2005-03-30 2006-10-12 International Business Machines Corporation Definition of workflow patterns using complex event processing
US20070100669A1 (en) * 2005-11-01 2007-05-03 Accenture Global Services Gmbh Collaborative intelligent task processor for insurance claims
US20070214023A1 (en) * 2006-03-08 2007-09-13 Guy Carpenter & Company, Inc. Spatial database system for generation of weather event and risk reports
US20070282639A1 (en) * 2005-11-21 2007-12-06 Leszuk Mary E Method and System for Enabling Automatic Insurance Claim Processing
US20080140857A1 (en) * 2006-03-21 2008-06-12 Conner Peter A Service-oriented architecture and methods for direct invocation of services utilizing a service requestor invocation framework
US20090031175A1 (en) * 2007-07-26 2009-01-29 Charu Chandra Aggarwal System and method for analyzing streams and counting stream items on multi-core processors
US20090240531A1 (en) * 2008-03-20 2009-09-24 Robert Charles Hilborn Integrated Processing System
US20090287509A1 (en) * 2008-05-16 2009-11-19 International Business Machines Corporation Method and system for automating insurance claims processing
US20100049552A1 (en) * 2008-03-14 2010-02-25 Jim Fini First Notice Of Loss reporting with integrated claim processing
US7739133B1 (en) * 2003-03-03 2010-06-15 Trover Solutions, Inc. System and method for processing insurance claims
US20100274590A1 (en) * 2009-04-24 2010-10-28 Compangano Jeffrey B Insurance administration systems and methods
US20100299162A1 (en) * 2005-09-07 2010-11-25 International Business Machines Corporation System for processing insurance coverage requests
US20110295624A1 (en) * 2010-05-25 2011-12-01 Underwriters Laboratories Inc. Insurance Policy Data Analysis and Decision Support System and Method
US8103527B1 (en) * 2007-06-29 2012-01-24 Intuit Inc. Managing insurance claim data across insurance policies
US20120143634A1 (en) * 2010-12-02 2012-06-07 American International Group, Inc. Systems, Methods, and Computer Program Products for Processing Insurance Claims
US20120311614A1 (en) * 2011-06-02 2012-12-06 Recursion Software, Inc. Architecture for pervasive software platform-based distributed knowledge network (dkn) and intelligent sensor network (isn)
US20130006608A1 (en) * 2011-06-28 2013-01-03 International Business Machines Corporation Generating Complex Event Processing Rules
US20130018936A1 (en) * 2011-07-12 2013-01-17 D Amico Nate Interacting with time-based content
US20130055060A1 (en) * 2011-08-30 2013-02-28 Sas Institute Inc. Techniques to remotely access form information
US20130226623A1 (en) * 2012-02-24 2013-08-29 Tata Consultancy Services Limited Insurance claims processing
US20130253961A1 (en) * 2009-12-22 2013-09-26 Hartford Fire Insurance Company System and method for processing data relating to component-based insurance coverage
US20140089990A1 (en) * 2011-06-08 2014-03-27 Nederlandse Organisatie Voor Toegepast- Natuurwetenschappelijk Onderzoek Tno Spatially-Segmented Content Delivery

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6141686A (en) * 1998-03-13 2000-10-31 Deterministic Networks, Inc. Client-side application-classifier gathering network-traffic statistics and application and user names using extensible-service provider plugin for policy-based network control
US7188182B2 (en) * 2002-03-20 2007-03-06 Microsoft Corporation Selecting an appropriate transfer mechanism for transferring an object
US7418574B2 (en) * 2002-10-31 2008-08-26 Lockheed Martin Corporation Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction
US20060111874A1 (en) * 2004-09-30 2006-05-25 Blazant, Inx. Method and system for filtering, organizing and presenting selected information technology information as a function of business dimensions
US8458467B2 (en) * 2005-06-21 2013-06-04 Cisco Technology, Inc. Method and apparatus for adaptive application message payload content transformation in a network infrastructure element
US8266327B2 (en) * 2005-06-21 2012-09-11 Cisco Technology, Inc. Identity brokering in a network element
US8429630B2 (en) * 2005-09-15 2013-04-23 Ca, Inc. Globally distributed utility computing cloud
US7958184B2 (en) * 2008-03-04 2011-06-07 International Business Machines Corporation Network virtualization in a multi-node system with multiple networks
JP4722973B2 (en) * 2008-08-12 2011-07-13 株式会社日立製作所 Request processing method and computer system
US8805110B2 (en) * 2008-08-19 2014-08-12 Digimarc Corporation Methods and systems for content processing
US8768313B2 (en) * 2009-08-17 2014-07-01 Digimarc Corporation Methods and systems for image or audio recognition processing
US8813065B2 (en) * 2010-04-26 2014-08-19 Vmware, Inc. Microcloud platform delivery system
US8909767B2 (en) * 2010-10-13 2014-12-09 Rackware, Inc. Cloud federation in a cloud computing environment
US9231876B2 (en) * 2011-03-29 2016-01-05 Nec Europe Ltd. User traffic accountability under congestion in flow-based multi-layer switches
NZ617626A (en) * 2011-05-31 2015-09-25 Cardlink Services Ltd Addresses in financial systems
US8782395B1 (en) * 2011-09-29 2014-07-15 Riverbed Technology, Inc. Monitoring usage of WAN optimization devices integrated with content delivery networks
US8893147B2 (en) * 2012-01-13 2014-11-18 Ca, Inc. Providing a virtualized replication and high availability environment including a replication and high availability engine
US9280240B2 (en) * 2012-11-14 2016-03-08 Synaptics Incorporated System and method for finite element imaging sensor devices
US9635088B2 (en) * 2012-11-26 2017-04-25 Accenture Global Services Limited Method and system for managing user state for applications deployed on platform as a service (PaaS) clouds
US20140278573A1 (en) * 2013-03-15 2014-09-18 State Farm Mutual Automobile Insurance Company Systems and methods for initiating insurance processing using ingested data
US9674315B2 (en) * 2013-05-07 2017-06-06 Futurewei Technologies, Inc. Methods for dynamically binding header field identifiers in a network control protocol
US20140371941A1 (en) * 2013-06-18 2014-12-18 The Regents Of The University Of Colorado, A Body Corporate Software-defined energy communication networks


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Aiyagari, Sanjay et al. AMQP: Advanced Message Queuing Protocol, Protocol Specification. Version 0-9, December 2006. https://www.rabbitmq.com/resources/specs/amqp0-9.pdf *
AMQP is the Internet Protocol for Business Messaging Website. 04 July 2011. https://web.archive.org/web/20110704212632/http://www.amqp.org/about/what *
Deerwester et al. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science. 1990. 41(6), pp. 391-407. *
Fong et al. Toward a scale-out data-management middleware for low-latency enterprise computing. IBM J. Res. & Dev., Vol. 57, No. 3/4, Paper 6, May/July 2013. *
Graves, Steven. 101: An Introduction to In-Memory Database Systems. 05 January 2012. http://www.low-latency.com/article/101-introduction-memory-database-systems *
NYSE Technologies Website and Fact Sheet for Data Fabric 6.0. August 2011. https://web.archive.org/web/20110823124532/http://nysetechnologies.nyx.com/data-technology/data-fabric-6-0 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10380518B2 (en) 2013-09-30 2019-08-13 Maximus Process tracking and defect detection
CN104794237A (en) * 2015-05-07 2015-07-22 中国人民大学 Web page information processing method and device
US10580079B1 (en) 2015-06-23 2020-03-03 Allstate Insurance Company Enterprise nervous system
CN106789923A (en) * 2016-11-28 2017-05-31 新疆熙菱信息技术股份有限公司 The normalized system and method for DTO protocol datas
US11030697B2 (en) 2017-02-10 2021-06-08 Maximus, Inc. Secure document exchange portal system with efficient user access
US11360899B2 (en) 2019-05-03 2022-06-14 Western Digital Technologies, Inc. Fault tolerant data coherence in large-scale distributed cache systems
WO2020226701A1 (en) * 2019-05-03 2020-11-12 Western Digital Technologies, Inc. Fault tolerant data coherence in large-scale distributed cache systems
US11656992B2 (en) 2019-05-03 2023-05-23 Western Digital Technologies, Inc. Distributed cache with in-network prefetch
WO2021029903A1 (en) * 2019-08-15 2021-02-18 Vouch, Inc. Rate ingestion tool
EP4014135A4 (en) * 2019-08-15 2023-08-09 Vouch, Inc. Rate ingestion tool
US11765250B2 (en) 2020-06-26 2023-09-19 Western Digital Technologies, Inc. Devices and methods for managing network traffic for a distributed cache
US11675706B2 (en) 2020-06-30 2023-06-13 Western Digital Technologies, Inc. Devices and methods for failure detection and recovery for a distributed cache
US11736417B2 (en) 2020-08-17 2023-08-22 Western Digital Technologies, Inc. Devices and methods for network message sequencing
CN112288585A (en) * 2020-11-20 2021-01-29 中国人寿保险股份有限公司 Insurance business actuarial data processing method and device and electronic equipment
CN115952200A (en) * 2023-01-17 2023-04-11 安芯网盾(北京)科技有限公司 Multi-source heterogeneous data aggregation query method and device based on an MPP (massively parallel processing) architecture

Also Published As

Publication number Publication date
US20140280457A1 (en) 2014-09-18
US20140278573A1 (en) 2014-09-18
US8930581B2 (en) 2015-01-06
US9015238B1 (en) 2015-04-21
US9948715B1 (en) 2018-04-17
US9208240B1 (en) 2015-12-08
US9363322B1 (en) 2016-06-07
US10715598B1 (en) 2020-07-14

Similar Documents

Publication Publication Date Title
US10715598B1 (en) Implementation of a web-scale data fabric
US20200394455A1 (en) Data analytics engine for dynamic network-based resource-sharing
Rao et al. The big data system, components, tools, and technologies: a survey
US20230169086A1 (en) Event driven extract, transform, load (etl) processing
US11503107B2 (en) Integrating logic in micro batch based event processing systems
Chen et al. Big data: A survey
US10754877B2 (en) System and method for providing big data analytics on dynamically-changing data models
Padgavankar et al. Big data storage and challenges
Begoli et al. Design principles for effective knowledge discovery from big data
US8972337B1 (en) Efficient query processing in columnar databases using bloom filters
US9158843B1 (en) Addressing mechanism for data at world wide scale
US11681651B1 (en) Lineage data for data records
JP2016532199A (en) Generation of multi-column index of relational database by data bit interleaving for selectivity
US8364651B2 (en) Apparatus, system, and method for identifying redundancy and consolidation opportunities in databases and application systems
Voit et al. Big data processing for full-text search and visualization with Elasticsearch
US20220083507A1 (en) Trust chain for official data and documents
US9323812B2 (en) Hybrid bifurcation of intersection nodes
Taori et al. Big Data Management
Wong et al. Everything a Data Scientist Should Know About Data Management
US11947558B2 (en) Built-in analytics for database management
US20230297550A1 (en) Dynamic data views
Santhiya et al. Big data insight: data management technologies, applications and challenges
Ahmad et al. Big Data Manipulation-A new concern to the ICT world (A massive Survey/statistics along with the necessity)
Zhu et al. A technical evaluation of Neo4j and elasticsearch for mining twitter data
Küsek et al. Project-based application on big data usage

Legal Events

Date Code Title Description
AS Assignment

Owner name: COBI SYSTEMS, LLC, COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CALVO, LYNN;KANNEGANTI, V. RAO;SIGNING DATES FROM 20140218 TO 20140225;REEL/FRAME:032413/0519

Owner name: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY, IL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ANTON, ALEX;SANIDAS, TIM G.;PERSCHALL, JEFF;AND OTHERS;REEL/FRAME:032380/0904

Effective date: 20140218

Owner name: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY, IL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:COBI SYSTEMS, LLC;REEL/FRAME:032381/0185

Effective date: 20140304

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION