
Showing posts from April, 2018

writing some manual test methods for binary suffix string tree

I am waiting for a Python machine learning book to get here via the Amazon fairy, so in the meantime I decided to play with the exercise from class I left off on. The goal I had in mind was to put words in as the key, and a suffix list as the value, in the binary tree. Then add binary searches through the suffix lists in the tree, so someone could search the tree for the suffix they were looking for and get back any key whose suffix list had a match. Say I had a tree of DNA strings and wanted to find all the key DNA sequences that contained 'AcTGAT': it could return me a list of them. I don't have it accomplished yet; I just started doing this to get my head back into this file's code. I find it extremely useful to just tinker around with stuff and see what I can make it do. It might not be useful to whoever is reading this, but I find it fun and worthwhile. Picture: Using my PrintTree class I made a ways…
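Since I don't have it built yet, here's a rough sketch of the idea, with a plain dict standing in for the tree class from the exercise, and made-up names like suffixes_of and find_keys_with_suffix (not the actual file's code):

from bisect import bisect_left

def suffixes_of(word):
    """Build the sorted suffix list for a word (or DNA string)."""
    return sorted(word[i:] for i in range(len(word)))

def contains(sorted_suffixes, target):
    """Binary search the sorted suffix list for a suffix that starts with target."""
    i = bisect_left(sorted_suffixes, target)
    return i < len(sorted_suffixes) and sorted_suffixes[i].startswith(target)

def find_keys_with_suffix(tree, target):
    """Return every key whose suffix list shows it contains target."""
    return [key for key, suffixes in tree.items() if contains(suffixes, target)]

# Keys are the strings, values are their suffix lists.
tree = {seq: suffixes_of(seq) for seq in ["GATTACA", "ACTGATTC", "TTACTGAT"]}
print(find_keys_with_suffix(tree, "ACTGAT"))  # the two sequences containing ACTGAT

The binary search there is the bisect_left call: because the suffix list is sorted, any suffix that starts with the target lands right at the insertion point, so one lookup per key is enough.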

more nltk tinkering

Found a way to kind of filter out verbs I didn't want. Also came up with a few more methods to play with nltk and its different things. Disclaimer: I've manually tested with print statements and such, but have not yet written a 'pytest' for it. Updated nltk0_ex.py file:

#!/usr/bin/python3
# -*- coding: utf-8 -*-
import sys
import re
from nltk.corpus import wordnet
from random import randint
import nltk as nltk

# place script1, script2, sys.argv[] here
#script1 = sys.argv[1]
#script2 = sys.argv[1]

"""
  Requires:
  above imports and:
  install - nltk
  install - python3

  In your python3 shell type these to download needed data sets:
  >>> import nltk
  >>> nltk.download('wordnet')
  >>> nltk.download('punkt')
  >>> nltk.download('averaged_perceptron_tagger')

  make_noun_response() -- requires script1 as sys.argv
  make_verb_response() -- requires scr…
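The excerpt cuts off before the new methods, so here's a minimal sketch of the kind of verb filtering I mean, using nltk's tokenizer and POS tagger; filter_verbs and the ignore list here are just for illustration, not the actual nltk0_ex.py code:

import nltk

# One-time downloads, as in the docstring above:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

IGNORE_VERBS = {"is", "are", "was", "were", "be", "been", "do", "does", "did"}

def filter_verbs(sentence):
    """Tag a sentence and keep only the verbs we actually care about."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)          # e.g. [('I', 'PRP'), ('like', 'VBP'), ...]
    verbs = [word for word, tag in tagged if tag.startswith('VB')]
    return [v for v in verbs if v.lower() not in IGNORE_VERBS]

print(filter_verbs("What are you doing to fix the broken tree?"))
# e.g. ['doing', 'fix'] -- 'are' gets filtered out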

methods to play with nltk so far

Update: Found some new things to do.... Putting them in the next post, with updated code. I'm not having much luck doing anything useful with the verbs from the user's input. I can't seem to find anything in the nltk material online suggesting there is a way of distinguishing 'action verbs' from the other verbs. Action verbs would be very useful. I'll make some pictures to show off a bit. The script1.txt and script2.txt I'm using as sys.argv[1] are simple files with questions. I'll do a pic of that too. So, here is my tinkering so far. And it is really fun to play with. Maybe I'm odd, but I think it's fun to pretend my computer has my quirky sense of humor. Pictures: noun response method make_noun_response(), verb response method make_verb_response(), and the script1.txt noun response questions. Code:

#!/usr/bin/python3
# -*- coding: utf-8 -*-
import sys
from nltk.corpus import wordnet
from random im…
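The code above is cut off, so as a rough sketch of what a noun response method along the lines of make_noun_response() can do with nltk's tagger; this is an illustration under those assumptions, not the real file:

import sys
import random
import nltk

def make_noun_response():
    """Ask a scripted question, then build a reply around a noun in the answer."""
    with open(sys.argv[1]) as f:                     # e.g. script1.txt
        questions = [line.strip() for line in f if line.strip()]
    print(random.choice(questions))
    answer = input('> ')
    tagged = nltk.pos_tag(nltk.word_tokenize(answer))
    nouns = [word for word, tag in tagged if tag.startswith('NN')]
    if nouns:
        print(f"Why do you care so much about {random.choice(nouns)}?")
    else:
        print("Tell me more.")

if __name__ == '__main__':
    make_noun_response()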

to make a script or not

Still playing with the idea of making a script file for my Wiwa to read from... it wouldn't be that hard. A pic of me playing with it. She only has 6 questions so far to play with. Here's a little code I worked up to test out the theory. It will need some error proofing, but it seems like I could just pop a file full of scripted questions in for her to ask, and have the bot pick at random from the text file and print them off. Problem is, if she picks a lot of the same questions over and over, which random does do sometimes, it won't seem very real. But I'm still playing with all the ideas. I'm also going to play with putting the questions into an SQL db and having her get them from there.... I don't know that it's better.... A text file is very hackable, but takes up so little space, and it's all for fun and learning at the moment. We'll see. A text file would also mean someone could change her to their whim. She'd be easily modifi…
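The little test code isn't in this excerpt, so here's a minimal sketch of the text-file idea, assuming a questions.txt with one question per line; shuffling the list up front is one easy way around the repeated-question problem:

import random

def load_questions(path="questions.txt"):
    """Read one question per line, skipping blanks."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def ask_all(questions):
    """Shuffle once so every question comes up before any repeats."""
    order = questions[:]
    random.shuffle(order)
    for question in order:
        print(question)
        input('> ')

if __name__ == '__main__':
    ask_all(load_questions())

Swapping the text file for an SQLite table would mostly just mean replacing load_questions() with a SELECT; the rest of the loop could stay the same.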