The Data Resource Center (DRC) will produce the CFDE Workbench, which will be composed of two main products: the CFDE information portal and the CFDE data resource portal. Both portals will be full-stack web applications with a backend database, integrated into one public site. The information portal will contain information about the CFDE in a dedicated About page, information about each participating and non-participating CF program, information about each data coordination center (DCC), a link to a catalog of CF datasets, a link to a catalog of CF tools and workflows, and news, events, funding opportunities, standards and protocols, educational programs and opportunities, social media feeds, and publications.

The CFDE data resource portal will contain the metadata, data, workflows, and tools produced by the CF programs and their DCCs. We will adopt the C2M2 data model for storing metadata describing DCC datasets. We will also archive relatively small omics datasets that do not have a home in widely established repositories and do not require protection of personal health information (PHI). In addition, we will expand the cataloging to cover CF tools, APIs, and workflows. Importantly, we will develop a search engine that will index all of these assembled digital assets and present unified results. Continuing the work established in the CFDE pilot phase, users of the data resource portal will be able to fetch identified datasets through links provided by the DCCs via the GA4GH Data Repository Service (DRS) protocol, as sketched below. This will include links to both raw and processed data.

The CFDE portals will provide access to the CF programs' processed data in various formats, including: 1) knowledge graph assertions; 2) gene, drug, metabolite, and other set libraries; 3) data matrices ready for machine learning and other AI applications; 4) signatures; and 5) bipartite graphs (illustrated below). The extract, transform, and load (ETL) scripts used to process the data into these formats will also be provided. Because such processed data is relatively small, we will archive it, mint unique IDs for it, and serve it via APIs. In addition, we will develop workflows that demonstrate how the processed data can be harmonized. At the same time, we will document the APIs of all CF DCCs and provide example Jupyter Notebooks that demonstrate how these datasets can be accessed, processed, and combined for integrative omics analysis. For the portals, we will also develop a library of tools that utilize these processed datasets; these tools will meet uniform requirements that enable a plug-and-play architecture.

To achieve these goals, we will work collaboratively with the other newly established CFDE centers, the participating CFDE DCCs, the CFDE NIH team, and relevant external entities and potential consumers of these software products. These interactions will take place via face-to-face meetings, virtual working group meetings, one-on-one meetings, Slack, GitHub, project management software, and email. Through these interactions, we will establish standards, workstreams, feedback loops, and mini-projects toward the goal of developing a lively and productive Common Fund Data Ecosystem.
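To make the DRS-based data access concrete, below is a minimal Python sketch of fetching a dataset from a DCC's DRS server. The flow follows the GA4GH DRS v1 endpoints; the base URL and object ID are hypothetical placeholders, not real CFDE endpoints.

```python
"""Minimal sketch of fetching a dataset via the GA4GH DRS protocol.
The base URL and object ID are hypothetical placeholders."""
import requests

DRS_BASE = "https://drs.example-dcc.org"  # hypothetical DCC DRS server
OBJECT_ID = "example-object-id"           # hypothetical DRS object ID

# Retrieve the DRS object record, which describes the file and how to access it.
obj = requests.get(f"{DRS_BASE}/ga4gh/drs/v1/objects/{OBJECT_ID}").json()
print(obj["name"], obj["size"], obj["checksums"])

# Pick an access method; a direct access_url can be downloaded immediately,
# while an access_id requires a second call to the /access endpoint.
method = obj["access_methods"][0]
if "access_url" in method:
    url = method["access_url"]["url"]
else:
    access = requests.get(
        f"{DRS_BASE}/ga4gh/drs/v1/objects/{OBJECT_ID}/access/{method['access_id']}"
    ).json()
    url = access["url"]

# Download the file bytes to the name recorded in the DRS object.
with open(obj["name"], "wb") as fh:
    fh.write(requests.get(url).content)
```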
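The set libraries, data matrices, and bipartite graphs listed above can be illustrated together. The sketch below assumes a hypothetical set library file in the widely used GMT format (one set per line, tab-separated: name, description, then member genes); it loads the library and converts it into a binary set-by-gene matrix, which is also the biadjacency matrix of a bipartite set-gene graph.

```python
"""Sketch of loading a gene set library (GMT format) and converting it
into a binary matrix suitable for machine learning applications.
The file path is a hypothetical placeholder."""
import pandas as pd

def load_gmt(path):
    """Parse a GMT file into a dict mapping each set name to its member genes."""
    library = {}
    with open(path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            name, genes = fields[0], [g for g in fields[2:] if g]
            library[name] = set(genes)
    return library

library = load_gmt("example_set_library.gmt")  # hypothetical file

# Build the binary matrix: rows are sets, columns are genes, 1 marks membership.
# The same matrix is the biadjacency matrix of a bipartite set-gene graph.
all_genes = sorted(set().union(*library.values()))
matrix = pd.DataFrame(
    [[int(g in members) for g in all_genes] for members in library.values()],
    index=list(library), columns=all_genes)
```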
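The minting of unique IDs for processed datasets and serving them via APIs could take a shape along the following lines. The Flask framework, the /datasets route, and the in-memory registry are illustrative assumptions, not the DRC's actual design; a production service would back the registry with a database and use persistent identifiers.

```python
"""Illustrative sketch of serving uniquely identified processed datasets
over an API. Framework, routes, and registry are assumptions."""
import uuid
from flask import Flask, jsonify, abort

app = Flask(__name__)
registry = {}  # maps minted IDs to dataset records; a real service would use a database

def mint_dataset(name, payload):
    """Mint a unique ID for a processed dataset and register it for serving."""
    dataset_id = str(uuid.uuid4())
    registry[dataset_id] = {"id": dataset_id, "name": name, "data": payload}
    return dataset_id

@app.route("/datasets/<dataset_id>")
def get_dataset(dataset_id):
    """Serve a processed dataset by its minted ID, or 404 if unknown."""
    record = registry.get(dataset_id)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    mint_dataset("example-signatures", {"genes": ["TP53", "EGFR"]})
    app.run()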
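Finally, one way to express the uniform requirements behind the plug-and-play tool architecture is a shared interface that every tool in the library implements, so the portal can register and invoke tools interchangeably. The WorkbenchTool interface and the example tool below are hypothetical illustrations, not a specification.

```python
"""Hypothetical sketch of uniform plug-and-play tool requirements: each tool
declares metadata and implements a common run() signature."""
from abc import ABC, abstractmethod

class WorkbenchTool(ABC):
    name: str          # display name shown in the portal's tool library
    input_format: str  # e.g. "gene_set", "data_matrix", "signature"

    @abstractmethod
    def run(self, dataset):
        """Consume a processed dataset and return a result the portal can render."""

class SetOverlapTool(WorkbenchTool):
    name = "Set overlap"
    input_format = "gene_set"

    def run(self, dataset):
        # Toy example: report pairwise overlaps between sets in a set library.
        names = list(dataset)
        return {(a, b): len(dataset[a] & dataset[b])
                for i, a in enumerate(names) for b in names[i + 1:]}
```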