update nav

This commit is contained in:
zachary62
2025-04-04 14:15:36 -04:00
parent c41c55499d
commit 93df0fecc2
37 changed files with 261 additions and 2 deletions

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Graph & StateGraph"
parent: "LangGraph"
nav_order: 1
---
# Chapter 1: Graph / StateGraph - The Blueprint of Your Application
Welcome to the LangGraph tutorial! We're excited to help you learn how to build powerful, stateful applications with Large Language Models (LLMs).

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Nodes (PregelNode)"
parent: "LangGraph"
nav_order: 2
---
# Chapter 2: Nodes (`PregelNode`) - The Workers of Your Graph
In [Chapter 1: Graph / StateGraph](01_graph___stategraph.md), we learned how `StateGraph` acts as a blueprint or a flowchart for our application. It defines the overall structure and the shared "whiteboard" (the State) that holds information.
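As a reminder of what that blueprint looks like in code, here is a minimal sketch, assuming a recent version of `langgraph`; the state shape and node names are illustrative, not taken from the tutorial itself:

```python
# Minimal StateGraph sketch -- illustrative names, not the tutorial's example.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    count: int

def increment(state: State) -> dict:
    # Each node reads the shared state and returns an update to it.
    return {"count": state["count"] + 1}

builder = StateGraph(State)            # the blueprint, tied to the shared state
builder.add_node("increment", increment)
builder.add_edge(START, "increment")   # where execution begins
builder.add_edge("increment", END)     # where it stops

graph = builder.compile()
print(graph.invoke({"count": 0}))      # {'count': 1}
```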

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Channels"
parent: "LangGraph"
nav_order: 3
---
# Chapter 3: Channels - The Communication System
In [Chapter 1: Graph / StateGraph](01_graph___stategraph.md), we learned about the `StateGraph` as the blueprint for our application, holding the shared "whiteboard" or state. In [Chapter 2: Nodes (`PregelNode`)](02_nodes___pregelnode__.md), we met the "workers" or Nodes that perform tasks and read/write to this whiteboard.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Control Flow Primitives"
parent: "LangGraph"
nav_order: 4
---
# Chapter 4: Control Flow Primitives (`Branch`, `Send`, `Interrupt`)
In [Chapter 3: Channels](03_channels.md), we saw how information is stored and updated in our graph's shared state using Channels. We have the blueprint ([`StateGraph`](01_graph___stategraph.md)), the workers ([`Nodes`](02_nodes___pregelnode__.md)), and the communication system ([Channels](03_channels.md)).

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Pregel Execution Engine"
parent: "LangGraph"
nav_order: 5
---
# Chapter 5: Pregel Execution Engine - The Engine Room
In the previous chapters, we learned how to build the blueprint of our application using [`StateGraph`](01_graph___stategraph.md), define the workers with [`Nodes`](02_nodes___pregelnode__.md), manage the shared state with [`Channels`](03_channels.md), and direct the traffic using [Control Flow Primitives](04_control_flow_primitives___branch____send____interrupt__.md).

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Checkpointer (BaseCheckpointSaver)"
parent: "LangGraph"
nav_order: 6
---
# Chapter 6: Checkpointer (`BaseCheckpointSaver`) - Saving Your Progress
In [Chapter 5: Pregel Execution Engine](05_pregel_execution_engine.md), we saw how the engine runs our graph step-by-step. But what happens if a graph takes hours to run, or if it needs to pause and wait for a human? If the program crashes or we need to stop it, do we lose all the progress?

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Table, SSTable & TableCache"
parent: "LevelDB"
nav_order: 1
---
# Chapter 1: Table / SSTable & TableCache
Welcome to your LevelDB journey! This is the first chapter where we'll start exploring the fundamental building blocks of LevelDB.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "MemTable"
parent: "LevelDB"
nav_order: 2
---
# Chapter 2: MemTable
In [Chapter 1: Table / SSTable & TableCache](01_table___sstable___tablecache.md), we learned how LevelDB stores the bulk of its data permanently on disk in sorted, immutable files called SSTables. We also saw how the `TableCache` helps access these files efficiently.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Write-Ahead Log (WAL)"
parent: "LevelDB"
nav_order: 3
---
# Chapter 3: Write-Ahead Log (WAL) & LogWriter/LogReader
In [Chapter 2: MemTable](02_memtable.md), we saw how LevelDB uses an in-memory `MemTable` (like a fast notepad) to quickly accept new writes (`Put` or `Delete`) before they are eventually flushed to an [SSTable](01_table___sstable___tablecache.md) file on disk.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "DBImpl"
parent: "LevelDB"
nav_order: 4
---
# Chapter 4: DBImpl - The Database General Manager
In the previous chapters, we've explored some key ingredients of LevelDB:

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "WriteBatch"
parent: "LevelDB"
nav_order: 5
---
# Chapter 5: WriteBatch - Grouping Changes Together
Welcome back! In [Chapter 4: DBImpl](04_dbimpl.md), we saw how `DBImpl` acts as the general manager, coordinating writes, reads, and background tasks. We learned that when you call `Put` or `Delete`, `DBImpl` handles writing to the [Write-Ahead Log (WAL)](03_write_ahead_log__wal____logwriter_logreader.md) and then updating the [MemTable](02_memtable.md).

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Version & VersionSet"
parent: "LevelDB"
nav_order: 6
---
# Chapter 6: Version & VersionSet - The Database Catalog
In the previous chapter, [Chapter 5: WriteBatch](05_writebatch.md), we learned how LevelDB groups multiple `Put` and `Delete` operations together to apply them atomically and efficiently. We saw that writes go first to the [Write-Ahead Log (WAL)](03_write_ahead_log__wal____logwriter_logreader.md) for durability, and then to the in-memory [MemTable](02_memtable.md).

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Iterator"
parent: "LevelDB"
nav_order: 7
---
# Chapter 7: Iterator - Your Guide Through the Database
Welcome back! In [Chapter 6: Version & VersionSet](06_version___versionset.md), we learned how LevelDB keeps track of all the live SSTable files using `Version` objects and the `VersionSet`. This catalog helps LevelDB efficiently find a single key by looking first in the [MemTable](02_memtable.md) and then pinpointing the right [SSTables](01_table___sstable___tablecache.md) to check.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Compaction"
parent: "LevelDB"
nav_order: 8
---
# Chapter 8: Compaction - Keeping the Library Tidy
In [Chapter 7: Iterator](07_iterator.md), we saw how LevelDB provides iterators to give us a unified, sorted view of our data, cleverly merging information from the in-memory [MemTable](02_memtable.md) and the various [SSTable](01_table___sstable___tablecache.md) files on disk.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "InternalKey & DBFormat"
parent: "LevelDB"
nav_order: 9
---
# Chapter 9: InternalKey & DBFormat - LevelDB's Internal Bookkeeping
Welcome to the final chapter of our deep dive into LevelDB's core components! In [Chapter 8: Compaction](08_compaction.md), we saw how LevelDB keeps its storage tidy by merging and rewriting [SSTables](01_table___sstable___tablecache.md) in the background. This compaction process relies heavily on being able to correctly compare different versions of the same key and discard old or deleted data.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "CLI (mcp command)"
parent: "MCP Python SDK"
nav_order: 1
---
# Chapter 1: Your Control Panel - The `mcp` Command-Line Interface
Welcome to the MCP Python SDK! This is your starting point for building powerful, interactive AI tools.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "FastMCP Server (FastMCP)"
parent: "MCP Python SDK"
nav_order: 2
---
# Chapter 2: Easier Server Building with `FastMCP`
In [Chapter 1: Your Control Panel - The `mcp` Command-Line Interface](01_cli___mcp__command_.md), we learned how to use the `mcp` command to run, test, and install MCP servers. We even saw a tiny example of a server file. But how do we *build* that server code without getting lost in complex details?

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "FastMCP Resources (Resource, ResourceManager)"
parent: "MCP Python SDK"
nav_order: 3
---
# Chapter 3: Sharing Data - FastMCP Resources (`Resource`, `ResourceManager`)
In [Chapter 2: Easier Server Building with `FastMCP`](02_fastmcp_server___fastmcp__.md), we saw how `FastMCP` and the `@server.tool()` decorator make it easy to create servers that can *perform actions* for clients, like our `echo` tool.
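As a quick refresher, a tool server along those lines might look like this minimal sketch; the server and tool names here are illustrative rather than the tutorial's exact example:

```python
# Minimal FastMCP tool server sketch -- illustrative names.
from mcp.server.fastmcp import FastMCP

server = FastMCP("Echo Server")

@server.tool()
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

if __name__ == "__main__":
    server.run()  # serves over stdio by default
```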

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "FastMCP Tools (Tool, ToolManager)"
parent: "MCP Python SDK"
nav_order: 4
---
# Chapter 4: FastMCP Tools (`Tool`, `ToolManager`)
In [Chapter 3: Sharing Data - FastMCP Resources (`Resource`, `ResourceManager`)](03_fastmcp_resources___resource____resourcemanager__.md), we learned how to make data available for clients to read using `Resource` objects, like putting books in a digital library. That's great for sharing information, but what if we want the client to be able to ask the server to *do* something?

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "FastMCP Prompts (Prompt, PromptManager)"
parent: "MCP Python SDK"
nav_order: 5
---
# Chapter 5: Reusable Chat Starters - FastMCP Prompts (`Prompt`, `PromptManager`)
In [Chapter 4: FastMCP Tools (`Tool`, `ToolManager`)](04_fastmcp_tools___tool____toolmanager__.md), we learned how to give our server specific *actions* it can perform, like a calculator tool. But modern AI often involves conversations, especially with Large Language Models (LLMs). How do we manage the instructions and conversation starters we send to these models?

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "FastMCP Context (Context)"
parent: "MCP Python SDK"
nav_order: 6
---
# Chapter 6: Talking Back - FastMCP Context (`Context`)
In [Chapter 5: Reusable Chat Starters - FastMCP Prompts (`Prompt`, `PromptManager`)](05_fastmcp_prompts___prompt____promptmanager__.md), we learned how to create reusable message templates for interacting with AI models. We've seen how to build servers with data resources ([Chapter 3](03_fastmcp_resources___resource____resourcemanager__.md)) and action tools ([Chapter 4](04_fastmcp_tools___tool____toolmanager__.md)).
@@ -153,7 +160,7 @@ if __name__ == "__main__":
1. **`@server.resource(...)`**: We added a simple resource named `config://task_settings` that just returns a string.
2. **`resource_contents = await ctx.read_resource("config://task_settings")`**: Inside our `run_long_task` tool, we now use `ctx.read_resource()` to fetch the content of our configuration resource. This allows the tool to dynamically access data managed by the server without having direct access to the resource's implementation function (`get_task_settings`).
3. **Processing Content**: The `read_resource` method returns an iterable of `ReadResourceContents` objects (often just one). We extracted the string content to use it.
Now, our tool can both communicate outwards (logs, progress) and interact inwards (read resources) using the same `Context` object, all within the scope of the single request it's handling.
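Putting those pieces together, the pattern looks roughly like this condensed sketch (not the chapter's full `run_long_task` example; the settings string is an arbitrary placeholder):

```python
# Condensed sketch of a tool reading a resource through Context.
from mcp.server.fastmcp import Context, FastMCP

server = FastMCP("Task Server")

@server.resource("config://task_settings")
def get_task_settings() -> str:
    return "chunk_size=10"  # placeholder settings string

@server.tool()
async def run_long_task(task: str, ctx: Context) -> str:
    # read_resource() returns an iterable of ReadResourceContents objects.
    resource_contents = await ctx.read_resource("config://task_settings")
    settings = next(iter(resource_contents)).content
    await ctx.info(f"Running '{task}' with settings: {settings}")  # outward: log
    return f"done ({settings})"
```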

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "MCP Protocol Types"
parent: "MCP Python SDK"
nav_order: 7
---
# Chapter 7: MCP Protocol Types - The Standard Language
In the previous chapter, [Chapter 6: Talking Back - FastMCP Context (`Context`)](06_fastmcp_context___context__.md), we saw how the `Context` object gives our tools and resources a "backstage pass" to send logs, report progress, and access other server features during a request. We've built up a good understanding of how `FastMCP` helps us create powerful servers with tools ([Chapter 4](04_fastmcp_tools___tool____toolmanager__.md)), resources ([Chapter 3](03_fastmcp_resources___resource____resourcemanager__.md)), and prompts ([Chapter 5](05_fastmcp_prompts___prompt____promptmanager__.md)).

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Client/Server Sessions (ClientSession, ServerSession)"
parent: "MCP Python SDK"
nav_order: 8
---
# Chapter 8: Client/Server Sessions (`ClientSession`, `ServerSession`)
Welcome back! In [Chapter 7: MCP Protocol Types](07_mcp_protocol_types.md), we learned about the standardized "digital forms": the Pydantic models that define the structure of messages exchanged between an MCP client and server. We saw examples like `CallToolRequest` and `ProgressNotification`.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Communication Transports"
parent: "MCP Python SDK"
nav_order: 9
---
# Chapter 9: Communication Transports (Stdio, SSE, WebSocket, Memory)
Welcome to the final chapter of our introductory journey into the `MCP Python SDK`! In [Chapter 8: Client/Server Sessions (`ClientSession`, `ServerSession`)](08_client_server_sessions___clientsession____serversession__.md), we learned how `Session` objects manage the ongoing conversation and state for a single connection between a client and a server, like dedicated phone operators handling a call.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "ndarray (N-dimensional array)"
parent: "NumPy Core"
nav_order: 1
---
# Chapter 1: ndarray (N-dimensional array)
Welcome to the NumPy Core tutorial! If you're interested in how NumPy works under the hood, you're in the right place. NumPy is the foundation for scientific computing in Python, and its core strength comes from a special object called the `ndarray`.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "dtype (data type object)"
parent: "NumPy Core"
nav_order: 2
---
# Chapter 2: dtype (Data Type Object)
In [Chapter 1: ndarray (N-dimensional array)](01_ndarray__n_dimensional_array_.md), we learned that NumPy's `ndarray` is a powerful grid designed to hold items **of the same type**. This "same type" requirement is fundamental to NumPy's speed and efficiency. But how does NumPy know *what kind* of data it's storing? That's where the `dtype` comes in!
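A short example makes the idea concrete (illustrative values, not drawn from the chapter):

```python
import numpy as np

a = np.array([1, 2, 3])                     # NumPy infers an integer dtype
b = np.array([1, 2, 3], dtype=np.float64)   # or we request one explicitly

print(a.dtype)     # e.g. int64 (platform dependent)
print(b.dtype)     # float64
print(b.itemsize)  # 8 -- bytes per element, dictated by the dtype
```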

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "ufunc (universal function)"
parent: "NumPy Core"
nav_order: 3
---
# Chapter 3: ufunc (Universal Function)
Welcome back! In [Chapter 1: ndarray (N-dimensional array)](01_ndarray__n_dimensional_array_.md), we met the `ndarray`, NumPy's powerful container for numerical data. In [Chapter 2: dtype (Data Type Object)](02_dtype__data_type_object_.md), we learned how `dtype`s specify the exact *kind* of data stored within those arrays.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Numeric Types (numerictypes)"
parent: "NumPy Core"
nav_order: 4
---
# Chapter 4: Numeric Types (`numerictypes`)
Hello again! In [Chapter 3: ufunc (Universal Function)](03_ufunc__universal_function_.md), we saw how NumPy uses universal functions (`ufuncs`) to perform fast calculations on arrays. We learned that these `ufuncs` operate element by element and can handle different data types using optimized C loops.
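For instance, `np.add` is a ufunc: the element-wise loop and the handling of mixed dtypes both happen in C. The snippet below is just an illustration with arbitrary values:

```python
import numpy as np

x = np.array([1, 2, 3], dtype=np.int32)
y = np.array([0.5, 0.5, 0.5])  # float64

# np.add is a ufunc: element-wise, with the loop and type handling in C.
print(np.add(x, y))        # [1.5 2.5 3.5]
print(np.add(x, y).dtype)  # float64 -- the common result type
```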

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Array Printing (arrayprint)"
parent: "NumPy Core"
nav_order: 5
---
# Chapter 5: Array Printing (`arrayprint`)
In the previous chapter, [Chapter 4: Numeric Types (`numerictypes`)](04_numeric_types___numerictypes__.md), we explored the different kinds of data NumPy can store in its arrays, like `int32`, `float64`, and more. Now that we know about the arrays ([`ndarray`](01_ndarray__n_dimensional_array_.md)), their data types ([`dtype`](02_dtype__data_type_object_.md)), the functions that operate on them ([`ufunc`](03_ufunc__universal_function_.md)), and the specific number types (`numerictypes`), a practical question arises: How do we actually *look* at these arrays, especially if they are very large?
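A quick illustration of the problem and of NumPy's answer (the threshold value below is chosen arbitrarily for the example):

```python
import numpy as np

big = np.arange(10_000)
print(big)  # summarized by default: [   0    1    2 ... 9997 9998 9999]

# The summarization is configurable; 50 is an arbitrary example value.
np.set_printoptions(threshold=50, edgeitems=2)
print(big)  # [   0    1 ... 9998 9999]
```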

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Multiarray Module"
parent: "NumPy Core"
nav_order: 6
---
# Chapter 6: multiarray Module
Welcome back! In [Chapter 5: Array Printing (`arrayprint`)](05_array_printing___arrayprint__.md), we saw how NumPy takes complex arrays and presents them in a readable format. We've now covered the array container ([`ndarray`](01_ndarray__n_dimensional_array_.md)), its data types ([`dtype`](02_dtype__data_type_object_.md)), the functions that compute on them ([`ufunc`](03_ufunc__universal_function_.md)), the catalog of types ([`numerictypes`](04_numeric_types___numerictypes__.md)), and how arrays are displayed ([`arrayprint`](05_array_printing___arrayprint__.md)).

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Umath Module"
parent: "NumPy Core"
nav_order: 7
---
# Chapter 7: umath Module
Welcome to Chapter 7! In [Chapter 6: multiarray Module](06_multiarray_module.md), we explored the core C engine that defines the `ndarray` object and handles fundamental operations like creating arrays and accessing elements. We saw that the actual power comes from C code.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "__array_function__ Protocol (overrides)"
parent: "NumPy Core"
nav_order: 8
---
# Chapter 8: __array_function__ Protocol / Overrides (`overrides`)
Welcome to the final chapter of our NumPy Core exploration! In [Chapter 7: umath Module](07_umath_module.md), we learned how NumPy implements its fast, element-wise mathematical functions (`ufuncs`) using optimized C code. We've seen the core components: the `ndarray` container, `dtype` descriptions, `ufunc` operations, numeric types, printing, and the C modules (`multiarray`, `umath`) that power them.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "LLM"
parent: "OpenManus"
nav_order: 1
---
# Chapter 1: The LLM - Your Agent's Brainpower
Welcome to the OpenManus tutorial! We're thrilled to have you on board. Let's start with the absolute core of any intelligent agent: the "brain" that does the thinking and understanding. In OpenManus, this brainpower comes from something called a **Large Language Model (LLM)**, and we interact with it using our `LLM` class.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Message & Memory"
parent: "OpenManus"
nav_order: 2
---
# Chapter 2: Message / Memory - Remembering the Conversation
In [Chapter 1: The LLM - Your Agent's Brainpower](01_llm.md), we learned how our agent uses the `LLM` class to access its "thinking" capabilities. But just like humans, an agent needs to remember what was said earlier in a conversation to make sense of new requests and respond appropriately.

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "BaseAgent"
parent: "OpenManus"
nav_order: 3
---
# Chapter 3: BaseAgent - The Agent Blueprint
In the previous chapters, we learned about the "brain" ([Chapter 1: The LLM](01_llm.md)) that powers our agents and how they remember conversations using [Chapter 2: Message / Memory](02_message___memory.md). Now, let's talk about the agent itself!
@@ -114,7 +121,7 @@ What actually happens when you call `agent.run()`? The `BaseAgent` provides a st
9. **Finalize:** Once the loop finishes (either `max_steps` reached or state changed to `FINISHED`/`ERROR`), it sets the state back to `IDLE` (unless it ended in `ERROR`).
10. **Return Results:** It returns a string summarizing the results from all the steps (see the sketch below).
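Putting these steps together, a heavily simplified sketch of such a loop might look like the following. This is only an illustration of the flow described above, not OpenManus's actual `BaseAgent` code; the class, enum, and method names are hypothetical:

```python
# Illustrative sketch only -- not the real BaseAgent implementation.
from enum import Enum

class AgentState(Enum):
    IDLE = "idle"
    RUNNING = "running"
    FINISHED = "finished"
    ERROR = "error"

class SketchAgent:
    def __init__(self, max_steps: int = 10):
        self.max_steps = max_steps
        self.state = AgentState.IDLE

    def step(self) -> str:
        raise NotImplementedError  # subclasses do the real per-step work

    def run(self, request: str) -> str:
        self.state = AgentState.RUNNING
        results = []
        for i in range(self.max_steps):
            if self.state in (AgentState.FINISHED, AgentState.ERROR):
                break
            results.append(f"Step {i + 1}: {self.step()}")
        if self.state != AgentState.ERROR:   # step 9: finalize back to IDLE
            self.state = AgentState.IDLE
        return "\n".join(results)            # step 10: return a summary string
```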
Here's a simplified diagram showing the flow:
```mermaid
sequenceDiagram

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Tool & ToolCollection"
parent: "OpenManus"
nav_order: 4
---
# Chapter 4: Tool / ToolCollection - Giving Your Agent Skills
In [Chapter 3: BaseAgent - The Agent Blueprint](03_baseagent.md), we learned how `BaseAgent` provides the standard structure for our agents, including a brain ([LLM](01_llm.md)) and memory ([Message / Memory](02_message___memory.md)). But what if we want our agent to do more than just *think* and *remember*? What if we want it to *act* in the world, like searching the web, running code, or editing files?

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "BaseFlow"
parent: "OpenManus"
nav_order: 5
---
# Chapter 5: BaseFlow - Managing Multi-Step Projects
In [Chapter 4: Tool / ToolCollection](04_tool___toolcollection.md), we saw how to give agents specific skills like web searching or running code using Tools. Now, imagine you have a task that requires multiple steps, maybe even using different skills (tools) or agents along the way. How do you coordinate this complex work?