typechain: An implementation of LangChain in Haskell

[ ai, gpl, library, program ]

Please see README.md for examples and usage.



Versions [RSS] 0.1.0.0, 0.1.0.1, 0.1.1.0, 0.2.0.0
Change log CHANGELOG.md
Dependencies aeson (>=2.2.1 && <2.3), base (>=4.17.2.0 && <5), bytestring (>=0.11.5 && <0.12), exceptions (>=0.10.5 && <0.11), http-conduit (>=2.3.8 && <2.4), lens (>=5.2.3 && <5.3), mtl (>=2.3.1 && <2.4), split (>=0.2.5 && <0.3), template-haskell (>=2.20.0 && <2.21), unordered-containers (>=0.2.20 && <0.3) [details]
Tested with ghc ==9.6.3
License GPL-3.0-or-later
Copyright 2024 Adam Brohl
Author Archaversine
Maintainer adam.brohl.w@gmail.com
Category AI
Bug tracker https://github.com/Archaversine/TypeChain/issues
Uploaded by archaversine at 2024-01-28T23:56:22Z
Distributions
Executables typechain-exe
Downloads 158 total (9 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2024-01-29 [all 1 reports]

Readme for typechain-0.2.0.0


TypeChain

An attempt to recreate LangChain in Haskell.

This is currently more a proof-of-concept than an actual functioning library.

Basic Model Prediction

Currently, GPT-3.5 Turbo is the only model that has been implemented. Below is an example of a simple program that asks the model what 1 + 1 is.

module Main where

import Control.Monad.State (evalStateT)

import DotEnv

import TypeChain.ChatModels.Types
import TypeChain.ChatModels.OpenAI

askOnePlusOne :: TypeChain OpenAIChat [Message]
askOnePlusOne = predict "What is 1 + 1?"

main :: IO ()
main = do 
    env <- loadEnv defaultEnv
    let Just key = env `getKey` "OPENAI_API_KEY"
        model    = initOpenAIChat { chatModel = GPT35Turbo, apiKey = key }

    response <- evalStateT askOnePlusOne model

    mapM_ print response

This provides the output:

assistant: 1 + 1 equals 2.

Model Prediction With Context

Let's say we want to ask our model what 1 + 1 is after setting a rule that 1 + 1 is 3. We can do this by passing in an initial system message when we create the model. Here is an example:

module Main where

import Control.Monad.State (evalStateT)

import DotEnv

import TypeChain.ChatModels.Types
import TypeChain.ChatModels.OpenAI

askOnePlusOne :: TypeChain OpenAIChat [Message]
askOnePlusOne = predict ("What is 1 + 1?" :: String)

main :: IO ()
main = do 
    env <- loadEnv defaultEnv
    let Just key = env `getKey` "OPENAI_API_KEY"
        -- Seed the conversation with a system message establishing the rule
        model    = initOpenAIChat { chatModel = GPT35Turbo
                                  , apiKey    = key
                                  , messages  = [Message System "From now on, 1 + 1 = 3."]
                                  }

    response <- evalStateT askOnePlusOne model

    mapM_ print response

This provides the output:

assistant: According to the new rule, 1 + 1 equals 3.

Model Prediction with Multiple Models

Let's say we want two models to interact with each other. Instead of passing a single model to the TypeChain type, we can pass any datatype, provided we have the appropriate lenses to access the individual models. In this example, we use a tuple with the _1 and _2 lenses to represent the two different models.


module Main where

import Control.Lens
import Control.Monad.State (execStateT)

import DotEnv

import TypeChain.ChatModels.Types
import TypeChain.ChatModels.OpenAI

-- Helper function to turn assistant messages into user messages 
-- We do this so we don't confuse the model and make it think it's talking 
-- to itself
toUserMessage :: Message -> Message 
toUserMessage msg = msg { _role = User }

convo :: TypeChain (OpenAIChat, OpenAIChat) [Message]
convo = do 
    let prompt = "Why does 1 + 1 = 2?" :: String

    -- Add appropriate context to model 2 so it thinks it asked this 
    -- question.
    _2 `memorizes` [Message Assistant prompt]

    -- Ask model 1 why 1 + 1 = 2 
    -- We do this to start a conversation between the two models
    response1 <- _1 `predicts` prompt

    -- Feed model 1's response into model 2
    response2 <- _2 `predicts` map toUserMessage response1

    return response2

main :: IO ()
main = do 
    env <- loadEnv defaultEnv
    let Just key = env `getKey` "OPENAI_API_KEY" 
        model1   = initOpenAIChat { chatModel = GPT35Turbo, apiKey = key }
        model2   = initOpenAIChat { chatModel = GPT35Turbo, apiKey = key }

    -- Keep only the second model and print its entire conversation log
    (_, model2') <- execStateT convo (model1, model2)

    mapM_ print (messages model2')

This produces the output:

assistant: Why does 1 + 1 = 2?

user: The equation 1 + 1 = 2 is a fundamental concept in mathematics and follows the principles of arithmetic. In the base-10 number system, 1 represents a single unit, and when another single unit (1) is added to it, we get a total of two units. This is the basic principle of addition, where combining two quantities or numbers gives us the sum or total of those quantities. Consequently, 1 + 1 equals 2.

assistant: The equation 1 + 1 = 2 is derived from the principles of mathematical logic and the base-10 number system. In this system, we assign the numeral 1 to represent a single unit or quantity. When we add two quantities or units of 1 together, we combine them to get a total of two units. 

This concept of addition is a fundamental principle in mathematics, and it follows the properties and rules of arithmetic. It holds true not just in the base-10 number system, but in any other number system as well. The equation 1 + 1 = 2 is a universally accepted fact in mathematics and forms the basis for further mathematical operations and calculations.

Prompt Templates

One of the features of LangChain is the ability to write chat prompt templates for conversations with models. TypeChain seeks to implement this feature as closely to LangChain as possible, with type safety to prevent runtime errors.

For example, consider the following langchain code:

template = "You are a helpful assistant that translates {from} to {to}."
human_template = "{text}"

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", template),
    ("human", human_template)
])

messages = chat_prompt.format_messages(from='English', to='French', text='Hello, World!')

The same code can be written in TypeChain:

{-# LANGUAGE TemplateHaskell #-}

import TypeChain.ChatModels

makeTemplate "Translate" [ system "You are an assistant that translates {lang1} to {lang2}."
                         , user "{text}."
                         ]

-- Fill in the template quickly and easily
messages :: [Message]
messages = mkTranslate "English" "French" "Hello, World!"

-- Fill in the template explicitly
messages' :: [Message]
messages' = toPrompt $ Translate { lang1 = "English", lang2 = "French", text = "Hello, World!" }

The makeTemplate function generates a record type at compile time where each field corresponds to a placeholder in the given strings. makeTemplate also creates a ToPrompt instance, allowing you to use the toPrompt function to convert the record type into a list of messages. For smaller templates, makeTemplate additionally generates a helper function (in this case mkTranslate) that takes all of the prompt parameters and returns the resulting list of messages.

So for the above example, the makeTemplate function would expand to the following code:

data Translate = Translate { lang1 :: String, lang2 :: String, text :: String }

instance ToPrompt Translate where 
    toPrompt template = [ Message System $ "You are an assistant that translates " ++ lang1 template ++ " to " ++ lang2 template ++ "."
                        , Message User $ text template 
                        ]

mkTranslate :: String -> String -> String -> [Message]
mkTranslate lang1 lang2 text = toPrompt $ Translate lang1 lang2 text
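
Templates can also feed directly into model prediction. The following is a hypothetical sketch, not part of the library's documented examples: it assumes the `messages` record field of `initOpenAIChat` (as used in the multi-model example above) accepts the template output as the initial conversation, and that `predict` and `evalStateT` work as in the earlier sections.

```haskell
module Main where

import Control.Monad.State (evalStateT)

import DotEnv

import TypeChain.ChatModels

-- Hypothetical sketch: seed a model's conversation with a filled-in
-- template before running any predictions. Assumes the `messages`
-- field shown in the multi-model example.
translateModel :: String -> OpenAIChat
translateModel key = initOpenAIChat { chatModel = GPT35Turbo
                                    , apiKey    = key
                                    , messages  = mkTranslate "English" "French" "Hello, World!"
                                    }

main :: IO ()
main = do
    env <- loadEnv defaultEnv
    let Just key = env `getKey` "OPENAI_API_KEY"

    -- The model starts out having seen the system instruction and
    -- "Hello, World!" as a user message, so follow-up prompts continue
    -- the same translation conversation.
    response <- evalStateT (predict ("Also translate: Good morning!" :: String)) (translateModel key)
    mapM_ print response
```

The same pattern should work with any record type generated by makeTemplate, since toPrompt (which mkTranslate wraps) always produces a list of messages.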