Patent: Tactile messages in an extended reality environment
Publication Number: 20230393659
Publication Date: 2023-12-07
Assignee: Meta Platforms Technologies
Abstract
Techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. In one particular aspect, an extended reality system is provided having a head-mounted device with a display to display content to a first user, sensors to capture input data, processors, and memories accessible to the processors, the memories storing instructions executable by the processors to perform processing including: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, where the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.
Claims
What is claimed is:
1. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; extracting features from the input data that correspond to an electronic communication; identifying an emoji from a lexicon of emojis based on the extracted features; obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output; and transmitting the digital assets to a device of a second user.
2. The extended reality system of claim 1, wherein the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features; and wherein the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.
3. The extended reality system of claim 1, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.
4. The extended reality system of claim 1, wherein the digital assets further comprise an image or video asset, an audio asset, or both.
5. The extended reality system of claim 1, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.
6. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.
7. The extended reality system of claim 6, wherein the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.
8. The extended reality system of claim 6, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.
9. The extended reality system of claim 6, wherein the digital assets further comprise an image or video asset, an audio asset, or both.
10. The extended reality system of claim 7, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.
12. The extended reality system of claim 11, wherein the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.
13. The extended reality system of claim 11, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.
14. The extended reality system of claim 12, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.
15. The extended reality system of claim 11, wherein the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.
16. The extended reality system of claim 11, wherein the haptic signal is predicted based on input data and model parameters learned from historical input data and context data, and the input data is captured from a head-mounted device of the second user.
17. The extended reality system of claim 11, wherein the haptic signal is part of digital assets obtained for an emoji identified from a lexicon of emojis.
18. The extended reality system of claim 17, wherein the emoji is identified from a lexicon of emojis based on extracted features from input data that correspond to an electronic communication, and the input data is captured from a head-mounted device of a second user.
19. The extended reality system of claim 17, wherein the digital assets further comprise an image or video asset, an audio asset, or both.
20. The extended reality system of claim 17, wherein the haptic signal for the emoji is transmitted to the head-mounted device of the first user.
Description
CROSS-REFERENCE TO RELATED APPLICATION
The present application is a non-provisional application of and claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 63/365,689, filed Jun. 1, 2022, the entire contents of which is incorporated herein by reference for all purposes.
FIELD
The present disclosure relates generally to haptic communication in an extended reality environment, and more particularly, to techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users.
BACKGROUND
BRIEF SUMMARY
Techniques disclosed herein relate generally to haptic communication in an extended reality environment. More specifically and without limitation, techniques disclosed herein relate to sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. Haptic emojis or reactions are tactile messages that can be sent and received throughout the day with a wearable device (e.g., a haptic glove or wristband). Each haptic emoji or reaction may be accompanied by audio and/or visual components to help train a user on the haptic signals. The tactile messages can be sent through traditional user interfaces, haptic-first interfaces, or more expressive gestures such as a hand wave, where in this example the recipient may feel a haptic pattern to mimic a wave motion.
In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.
In some embodiments, the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features, and the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.
In some embodiments, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.
In some embodiments, the digital assets further comprise an image or video asset, an audio asset, or both.
In some embodiments, the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.
In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user, one or more processors, and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data, and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.
In some embodiments, the haptic emoji is predicted and the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.
In some embodiments, the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.
In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.
In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.
In some embodiments, the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.
Some embodiments of the present disclosure include a computer-implemented method comprising part or all of one or more methods and/or part or all of one or more processes disclosed herein.
Some embodiments of the present disclosure include a system including one or more data processors. In some embodiments, the system includes a non-transitory computer readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.
Some embodiments of the present disclosure include a computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a simplified block diagram of a network environment in accordance with various embodiments.
FIG. 2A is an illustration depicting an example extended reality system that presents and controls user interface elements within an extended reality environment in accordance with various embodiments.
FIG. 2B is an illustration depicting user interface elements in accordance with various embodiments.
FIG. 3A is an illustration of an augmented reality system in accordance with various embodiments.
FIG. 3B is an illustration of a virtual reality system in accordance with various embodiments.
FIG. 4A is an illustration of haptic devices in accordance with various embodiments.
FIG. 4B is an illustration of an exemplary virtual reality environment in accordance with various embodiments.
FIG. 4C is an illustration of an exemplary augmented reality environment in accordance with various embodiments.
FIG. 5 is a simplified block diagram of a social communication platform in accordance with various embodiments.
FIG. 6A is a simplified block diagram illustrating a social communication system for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.
FIG. 6B is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.
FIG. 6C is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.
FIG. 7 is a flowchart illustrating a process for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.
FIG. 8 is a simplified block diagram illustrating a machine-learning prediction system in accordance with various embodiments.
FIG. 9 is a flowchart illustrating a process to predict haptic emojis for conveying a touch message in accordance with various embodiments.
FIG. 10 is a simplified block diagram illustrating a social communication system for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.
FIG. 11 is a flowchart illustrating a process for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.
FIG. 12 is a simplified block diagram illustrating a signal generator for operating cutaneous actuators to deliver haptic output (tactile feedback) to a user in accordance with various embodiments.
FIG. 13 is a flowchart illustrating a process for generating a haptic output in accordance with various embodiments.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
INTRODUCTION
In another exemplary embodiment, an extended reality system is provided comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.
Advantageously, the tactile messages are more expressive than visual or audio based messages, and are particularly useful when a user cannot view or listen to visual or audio based messages.
Extended Reality System Overview
This disclosure contemplates any suitable network 120. As an example and not by way of limitation, one or more portions of a network 120 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. A network 120 may include one or more networks 120.
Links 125 may connect a client system 105, a virtual assistant engine 110, and a remote system 115 to a communication network 120 or to each other. This disclosure contemplates any suitable links 125. In particular embodiments, one or more links 125 include one or more wireline (such as for example Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as for example Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 125 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link 125, or a combination of two or more such links 125. Links 125 need not necessarily be the same throughout a network environment 100. One or more first links 125 may differ in one or more respects from one or more second links 125.
In various embodiments, a client system 105 is an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate extended reality functionalities in accordance with techniques of the disclosure. As an example, and not by way of limitation, a client system 105 may include a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, a VR, MR, or AR headset such as an AR/VR HMD, other suitable electronic device capable of displaying extended reality content, or any suitable combination thereof. In particular embodiments, the client system 105 is an AR/VR HMD as described in detail with respect to FIG. 2. This disclosure contemplates any suitable client system 105 configured to generate and output extended reality content to the user. The client system 105 may enable its user to communicate with other users at other client systems 105.
A user at the client system 105 may use the virtual assistant application 130 to interact with the virtual assistant engine 110. In some instances, the virtual assistant application 130 is a stand-alone application or integrated into another application such as a social-networking application or another suitable application (e.g., an artificial simulation application). In some instances, the virtual assistant application 130 is integrated into the client system 105 (e.g., part of the operating system of the client system 105), an assistant hardware device, or any other suitable hardware devices. In some instances, the virtual assistant application 130 may be accessed via a web browser 135. In some instances, the virtual assistant application 130 passively listens to and watches interactions of the user in the real-world, and processes what it hears and sees (e.g., explicit input such as audio commands or interface commands, contextual awareness derived from audio or physical actions of the user, objects in the real-world, environmental triggers such as weather or time, and the like) in order to interact with the user in an intuitive manner.
In various embodiments, a remote system 115 may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A remote system 115 may be operated by a same entity or a different entity from an entity operating the virtual assistant engine 110. In particular embodiments, however, the virtual assistant engine 110 and third-party systems 115 may operate in conjunction with each other to provide virtual content to users of the client system 105. For example, a social-networking system 145 may provide a platform, or backbone, which other systems, such as third-party systems, may use to provide social-networking services and functionality to users across the Internet, and the virtual assistant engine 110 may access these systems to provide virtual content on the client system 105.
The remote system 115 may include a content object provider 150. A content object provider 150 includes one or more sources of virtual content objects, which may be communicated to the client system 105. As an example, and not by way of limitation, virtual content objects may include information regarding things or activities of interest to the user, such as, for example, movie show times, movie reviews, restaurant reviews, restaurant menus, product information and reviews, instructions on how to perform various tasks, exercise regimens, cooking recipes, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects. As another example and not by way of limitation, content objects may include virtual objects such as virtual interfaces, 2D or 3D graphics, media content, or other suitable virtual objects.
In the example shown in FIG. 2A, virtual information or objects 240, 245 are mapped at a position relative to a physical object 235. As should be understood, the virtual imagery (e.g., virtual content such as information or objects 240, 245 and virtual user interface 250) does not exist in the real-world, physical environment. Virtual user interface 250 may be fixed, as relative to the user 220, the user's hand 230, physical objects 235, or other virtual content such as virtual information or objects 240, 245, for instance. As a result, client system 200 renders, at a user interface position that is locked relative to a position of the user 220, the user's hand 230, physical objects 235, or other virtual content in the extended reality environment, virtual user interface 250 for display at extended reality system 205 as part of extended reality content 225. As used herein, a virtual element ‘locked’ to a position of virtual content or a physical object is rendered at a position relative to the position of the virtual content or physical object so as to appear to be part of or otherwise tied in the extended reality environment to the virtual content or physical object.
Client system 200 may trigger generation and rendering of virtual content based on a current field of view of user 220, as may be determined by real-time gaze 255 tracking of the user, or other conditions. More specifically, image capture devices of the sensors 215 capture image data representative of objects in the real-world, physical environment that are within a field of view of the image capture devices. During operation, the client system 200 performs object recognition within image data captured by the image capture devices of extended reality system 205 to identify objects in the physical environment such as the user 220, the user's hand 230, and/or physical objects 235. Further, the client system 200 tracks the position, orientation, and configuration of the objects in the physical environment over a sliding window of time. Field of view typically corresponds with the viewing perspective of the extended reality system 205. In some examples, the extended reality application presents extended reality content 225 comprising mixed reality and/or augmented reality.
Various embodiments disclosed herein may include or be implemented in conjunction with various types of extended reality systems. Extended reality content generated by the extended reality systems may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The extended reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, extended reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an extended reality and/or are otherwise used in (e.g., to perform activities in) an extended reality.
The extended reality systems may be implemented in a variety of different form factors and configurations. Some extended reality systems may be designed to work without near-eye displays (NEDs). Other extended reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented reality system 300 in FIG. 3A) or that visually immerses a user in an extended reality (such as, e.g., virtual reality system 350 in FIG. 3B). While some extended reality devices may be self-contained systems, other extended reality devices may communicate and/or coordinate with external devices to provide an extended reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.
As shown in FIG. 3A, augmented reality system 300 may include an eyewear device 305 with a frame 310 configured to hold a left display device 315(A) and a right display device 315(B) in front of a user's eyes. Display devices 315(A) and 315(B) may act together or independently to present an image or series of images to a user. While augmented reality system 300 includes two displays, embodiments of this disclosure may be implemented in augmented reality systems with a single NED or more than two NEDs.
In some embodiments, augmented reality system 300 may include one or more sensors, such as sensor 320. Sensor 320 may generate measurement signals in response to motion of augmented reality system 300 and may be located on substantially any portion of frame 310. Sensor 320 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented reality system 300 may or may not include sensor 320 or may include more than one sensor. In embodiments in which sensor 320 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 320. Examples of sensor 320 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
In some examples, augmented reality system 300 may also include a microphone array with a plurality of acoustic transducers 325(A)-325(J), referred to collectively as acoustic transducers 325. Acoustic transducers 325 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 325 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in FIG. 3A may include, for example, ten acoustic transducers: 325(A) and 325(B), which may be designed to be placed inside a corresponding ear of the user, acoustic transducers 325(C), 325(D), 325(E), 325(F), 325(G), and 325(H), which may be positioned at various locations on frame 310, and/or acoustic transducers 325(I) and 325(J), which may be positioned on a corresponding neckband 330.
In some embodiments, one or more of acoustic transducers 325(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 325(A) and/or 325(B) may be earbuds or any other suitable type of headphone or speaker. The configuration of acoustic transducers 325 of the microphone array may vary. While augmented reality system 300 is shown in FIG. 3A as having ten acoustic transducers 325, the number of acoustic transducers 325 may be greater or less than ten. In some embodiments, using higher numbers of acoustic transducers 325 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 325 may decrease the computing power required by an associated controller 335 to process the collected audio information. In addition, the position of each acoustic transducer 325 of the microphone array may vary. For example, the position of an acoustic transducer 325 may include a defined position on the user, a defined coordinate on frame 310, an orientation associated with each acoustic transducer 325, or some combination thereof.
Acoustic transducers 325(A) and 325(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 325 on or surrounding the ear in addition to acoustic transducers 325 inside the ear canal. Having an acoustic transducer 325 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 325 on either side of a user's head (e.g., as binaural microphones), augmented reality system 300 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wired connection 340, and in other embodiments acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wireless connection (e.g., a Bluetooth connection). In still other embodiments, acoustic transducers 325(A) and 325(B) may not be used at all in conjunction with augmented reality system 300.
Acoustic transducers 325 on frame 310 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 315(A) and 315(B), or some combination thereof. Acoustic transducers 325 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented reality system 300. In some embodiments, an optimization process may be performed during manufacturing of augmented reality system 300 to determine relative positioning of each acoustic transducer 325 in the microphone array.
In some examples, augmented reality system 300 may include or be connected to an external device (e.g., a paired device), such as neckband 330. Neckband 330 generally represents any type or form of paired device. Thus, the following discussion of neckband 330 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wristbands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.
As shown, neckband 330 may be coupled to eyewear device 305 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 305 and neckband 330 may operate independently without any wired or wireless connection between them. While FIG. 3A illustrates the components of eyewear device 305 and neckband 330 in example locations on eyewear device 305 and neckband 330, the components may be located elsewhere and/or distributed differently on eyewear device 305 and/or neckband 330. In some embodiments, the components of eyewear device 305 and neckband 330 may be located on one or more additional peripheral devices paired with eyewear device 305, neckband 330, or some combination thereof.
Neckband 330 may be communicatively coupled with eyewear device 305 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented reality system 300. In the embodiment of FIG. 3A, neckband 330 may include two acoustic transducers (e.g., 325(I) and 325(J)) that are part of the microphone array (or potentially form their own microphone subarray). Neckband 330 may also include a controller 342 and a power source 345.
Acoustic transducers 325(I) and 325(J) of neckband 330 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of FIG. 3A, acoustic transducers 325(I) and 325(J) may be positioned on neckband 330, thereby increasing the distance between the neckband acoustic transducers 325(I) and 325(J) and other acoustic transducers 325 positioned on eyewear device 305. In some cases, increasing the distance between acoustic transducers 325 of the microphone array may improve the accuracy of beamforming performed via the microphone array. For example, if a sound is detected by acoustic transducers 325(C) and 325(D) and the distance between acoustic transducers 325(C) and 325(D) is greater than, e.g., the distance between acoustic transducers 325(D) and 325(E), the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 325(D) and 325(E).
Power source 345 in neckband 330 may provide power to eyewear device 305 and/or to neckband 330. Power source 345 may include, without limitation, lithium-ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 345 may be a wired power source. Including power source 345 on neckband 330 instead of on eyewear device 305 may help better distribute the weight and heat generated by power source 345.
As noted, some extended reality systems may, instead of blending an extended reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual reality system 350 in FIG. 3B, that mostly or completely covers a user's field of view. Virtual reality system 350 may include a front rigid body 355 and a band 360 shaped to fit around a user's head. Virtual reality system 350 may also include output audio transducers 365(A) and 365(B). Furthermore, while not shown in FIG. 3B, front rigid body 355 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an extended reality experience.
In addition to or instead of using display screens, some of the extended reality systems described herein may include one or more projection systems. For example, display devices in augmented reality system 300 and/or virtual reality system 350 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both extended reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Extended reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
The extended reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented reality system 300 and/or virtual reality system 350 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An extended reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
The extended reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.
In some embodiments, the extended reality systems described herein may also include tactile (e.g., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floor mats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other extended reality devices, within other extended reality devices, and/or in conjunction with other extended reality devices.
By providing haptic sensations, audible content, and/or visual content, extended reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, extended reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Extended reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's extended reality experience in one or more of these contexts and environments and/or in other contexts and environments.
As noted, extended reality systems 300 and 350 may be used with a variety of other types of devices to provide a more compelling extended reality experience. These devices may be haptic interfaces with transducers that provide haptic feedback and/or that collect haptic information about a user's interaction with an environment. The extended reality systems disclosed herein may include various types of haptic interfaces that detect or convey various types of haptic information, including tactile feedback (e.g., feedback that a user detects via nerves in the skin, which may also be referred to as cutaneous feedback) and/or kinesthetic feedback (e.g., feedback that a user detects via receptors located in muscles, joints, and/or tendons).
One or more vibrotactile devices 420 may be positioned at least partially within one or more corresponding pockets formed in textile material 415 of vibrotactile system 400. Vibrotactile devices 420 may be positioned in locations to provide a vibrating sensation (e.g., haptic feedback) to a user of vibrotactile system 400. For example, vibrotactile devices 420 may be positioned against the user's finger(s), thumb, or wrist, as shown in FIG. 4A. Vibrotactile devices 420 may, in some examples, be sufficiently flexible to conform to or bend with the user's corresponding body part(s).
A power source 425 (e.g., a battery) for applying a voltage to the vibrotactile devices 420 for activation thereof may be electrically coupled to vibrotactile devices 420, such as via conductive wiring 430. In some examples, each of vibrotactile devices 420 may be independently electrically coupled to power source 425 for individual activation. In some embodiments, a processor 435 may be operatively coupled to power source 425 and configured (e.g., programmed) to control activation of vibrotactile devices 420.
Vibrotactile system 400 may optionally include other subsystems and components, such as touch-sensitive pads 450, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., an on/off button, a vibration control element, etc.). During use, vibrotactile devices 420 may be configured to be activated for a variety of different reasons, such as in response to the user's interaction with user interface elements, a signal from the motion or position sensors, a signal from the touch-sensitive pads 450, a signal from the pressure sensors, a signal from the other device or system 440, etc.
Although power source 425, processor 435, and communications interface 445 are illustrated in FIG. 4A as being positioned in haptic device 410, the present disclosure is not so limited. For example, one or more of power source 425, processor 435, or communications interface 445 may be positioned within haptic device 405 or within another wearable textile.
Haptic wearables, such as those shown in and described in connection with FIG. 4A, may be implemented in a variety of types of extended reality systems and environments. FIG. 4B shows an example extended reality environment 460 including one head-mounted virtual reality display and two haptic devices (e.g., gloves), and in other embodiments any number and/or combination of these components and other components may be included in an extended reality system. For example, in some embodiments there may be multiple head-mounted displays each having an associated haptic device, with each head-mounted display and each haptic device communicating with the same console, portable computing device, or other computing system.
While haptic interfaces may be used with virtual reality systems, as shown in FIG. 4B, haptic interfaces may also be used with augmented reality systems, as shown in FIG. 4C. FIG. 4C is a perspective view of a user 475 interacting with an augmented reality system 480. In this example, user 475 may wear a pair of augmented reality glasses 485 that may have one or more displays 487 and that are paired with a haptic device 490. In this example, haptic device 490 may be a wristband that includes a plurality of band elements 492 and a tensioning mechanism 495 that connects band elements 492 to one another.
One or more of band elements 492 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of band elements 492 may be configured to provide one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. To provide such feedback, band elements 492 may include one or more of various types of actuators. In one example, each of band elements 492 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user. Alternatively, only a single band element or a subset of band elements may include vibrotactors.
Haptic devices 405, 410, 470, and 490 may include any suitable number and/or type of haptic transducer, sensor, and/or feedback mechanism. For example, haptic devices 405, 410, 470, and 490 may include one or more mechanical transducers, piezoelectric transducers, and/or fluidic transducers. Haptic devices 405, 410, 470, and 490 may also include various combinations of different types and forms of transducers that work together or independently to enhance a user's extended reality experience. In one example, each of band elements 492 of haptic device 490 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user.
In some embodiments, the data 525 obtained via the client system 505 is associated with one or more privacy settings. The data 525 may be stored on or otherwise associated with any suitable computing system or application, such as, for example, a social-networking system, a client system, a third-party system, a messaging application, a photo-sharing application, a biometric data acquisition application, an artificial-reality application, a virtual assistant application, and/or any other suitable computing system or application.
In some embodiments, privacy settings for the data 525 may specify a “blocked list” of users or other entities that should not be allowed to access certain information associated with the data 525. In some cases, the blocked list may include third-party entities. The blocked list may specify one or more users or entities for which the data 525 is not visible.
Privacy settings associated with the data 525 may specify any suitable granularity of permitted access or denial of access. As an example, access or denial of access may be specified for particular users (e.g., only me, my roommates, my boss), users within a particular degree-of-separation (e.g., friends, friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of particular university), all users (“public”), no users (“private”), users of third-party systems, particular applications (e.g., third-party applications, external websites), other suitable entities, or any suitable combination thereof. In some embodiments, different pieces of the data 525 of the same type associated with a user may have different privacy settings. In addition, one or more default privacy settings may be set for each piece of data 525 of a particular data-type.
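For illustration only, a minimal sketch of how such a per-record privacy check could be evaluated is shown below; the field names, audience values, and function name are hypothetical and are not the platform's actual schema.

```python
def may_access(data_record: dict, requesting_user: str, relationship: str) -> bool:
    """Evaluate a per-record privacy setting: blocked users are always denied, then the
    record's audience setting is compared with the requester's relationship to the owner.
    The field names and audience values here are illustrative assumptions."""
    settings = data_record.get("privacy", {"audience": "private", "blocked": []})
    if requesting_user in settings.get("blocked", []):
        return False
    audience = settings.get("audience", "private")
    if audience == "public":
        return True
    if audience == "private":
        return False
    if audience == "friends_of_friends":
        return relationship in {"friends", "friends_of_friends"}
    return relationship == audience
```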
Although the social communication platform 500 is described with regard to generating the haptic signal 535 at the client system 505(a) of the sending user, it should be understood that the haptic signal 535 can alternatively be generated at the client system 505(b) of the receiving user or a completely different remote system (e.g., a distributed social networking system) using similar components and techniques described herein. Moreover, the social communication platform 500 illustrates a one-way haptic communication where the sending user sends a haptic signal to the receiving user; however, it should be understood that the haptic communication can be bidirectional and the client system 505(b) of the receiving user could have similar components as described with respect to the client system 505(a) of the sending user, and likewise the client system 505(a) of the sending user could have similar components as described with respect to the client system 505(b) of the receiving user. Further, a sending user can broadcast the haptic signal via network 540 to a plurality of client systems 505(b-n) associated with receiving users instead of a single receiving user.
Touch Communication Techniques
Touch Communication Using a Lexicon of Emojis
FIG. 6A is a block diagram illustrating components of a social communication system 600 for converting input data 605 to haptic output 610 using a lexicon of emojis 615 in accordance with various embodiments. To generate the haptic output 610, input data 605 from a first user (sending user) is processed by an algorithm using the lexicon of emojis 615 to obtain a corresponding haptic signal that is transmitted to a second user (receiving user) to operate the haptic feedback device. The haptic feedback device receives the transmitted haptic signals, translates the haptic signals into the haptic output 610, and transmits the haptic output 610 corresponding to the received haptic signals to a body of the second user.
In some instances, the lexicon of emojis 615 may be a key-value store, or key-value database, which is a type of data storage software program that stores data as a set of unique identifiers, each of which has an associated value. This data pairing is known as a “key-value pair.” The unique identifier is the “key” for an item of data, and a value is either the data being identified or the location of that data. Although the lexicon of emojis 615 is described herein as a key-value database, it should be understood that other database designs could be used without departing from the spirit and scope of the present disclosure. For example, in other instances, the lexicon of emojis 615 is a relational database, where data is stored in tables composed of rows and columns. The database developer specifies attributes of the data (i.e., emojis and assets thereof) to be stored in the table up front. This creates significant opportunities for optimizations such as data compression and performance around aggregations and data access. The attributes of the data may be queried in a similar fashion as keys in the key-value database to identify emojis associated with such attributes.
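As a minimal sketch of such a key-value lexicon (the names EmojiAssets and HAPTIC_LEXICON, and the example parameter values, are hypothetical and only illustrate the pairing of a key with its digital assets):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmojiAssets:
    """Digital assets for one emoji in the lexicon (illustrative structure)."""
    image_asset: Optional[str]  # e.g., path to a gif or json animation
    audio_asset: Optional[str]  # e.g., path to a wav or mp3 file
    haptic_signal: dict         # e.g., {"interval_ms": ..., "pitch_hz": ..., "amplitude": ...}

# Key-value store: the key identifies the emoji, the value holds its digital assets.
HAPTIC_LEXICON = {
    "wave": EmojiAssets("wave.gif", "wave.wav",
                        {"interval_ms": 120, "pitch_hz": 180, "amplitude": 0.6}),
    "haha": EmojiAssets("haha.gif", "haha.wav",
                        {"interval_ms": 80, "pitch_hz": 250, "amplitude": 0.8}),
}

def lookup_emoji(key: str) -> Optional[EmojiAssets]:
    """Return the digital assets for a key, or None if the emoji is not in the lexicon."""
    return HAPTIC_LEXICON.get(key)
```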
The lexicon of emojis 615 may comprise any number of emojis 620(A-N). Each of the emojis 620 is configured with a corresponding electronic communication that includes a visual component (shown in FIG. 6B as the character in each illustration), an audio component (shown in FIG. 6B as the verbal utterance in each illustration), a haptic component (shown in FIG. 6C as the haptic signal pattern in each illustration), or a combination thereof. Emojis with a visual component (e.g., a pictogram, logogram, or ideogram) are associated within the lexicon to an image or video asset (e.g., a jpeg, gif, mov, or json file). Emojis with an audio component are associated within the lexicon to an audio asset (e.g., a wav or mp3 file). Emojis with a haptic component are associated within the lexicon to a haptic signal (e.g., parameter information on interval, pitch, amplitude, or a combination thereof for a touch message to be perceived by a receiving user's body), which can be converted into haptic output 610.
The haptic signal for each emoji may be pre-generated. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output 610 that match the image or animation of the emoji and/or the sound effect of the emoji (i.e., the image or audio component supplements the understanding of the haptic component). In other instances, the haptic signal is configured with parameter information determined by a user (e.g., a perceptual scientist) to generate patterns for the haptic output 610 that best communicate the emotion to a user (i.e., the haptic component has a high likelihood of conveying the emotion to a user without the image or audio component). In other instances, the haptic signal is configured with parameter information determined by a user (e.g., a user of the HMD device) to generate patterns for the haptic output 610 that customize touch communication to a user (i.e., the haptic component is customized for conveying the emotion to a user with or without the image or audio component).
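As a rough sketch of how interval, pitch, and amplitude parameters might be rendered into a vibration pattern (a simple sample-based rendering is assumed here; the function, its defaults, and the gating scheme are illustrative rather than the disclosed pattern generation):

```python
import math

def haptic_pattern(interval_ms: float, pitch_hz: float, amplitude: float,
                   duration_ms: float = 500.0, sample_rate: int = 1000) -> list:
    """Render a simple vibration pattern: a sinusoid at `pitch_hz` scaled by `amplitude`,
    gated on and off so the actuator pulses every `interval_ms` milliseconds."""
    samples = []
    for n in range(int(duration_ms * sample_rate / 1000)):
        t_ms = n * 1000.0 / sample_rate
        gate = 1.0 if (t_ms % (2 * interval_ms)) < interval_ms else 0.0  # pulse on/off
        samples.append(gate * amplitude * math.sin(2 * math.pi * pitch_hz * t_ms / 1000.0))
    return samples
```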
A lexicon signal converter 625 converts the input data 605 into haptic signals using the lexicon of emojis 615. The lexicon signal converter 625 may be a component in a signal generator (e.g., signal generator 555 described with respect to FIG. 5). The lexicon signal converter 625 comprises an input data processing module 630, a pattern recognition module 635, and a query engine 640. The lexicon signal converter 625 determines the characteristics of the input data 605 received (e.g., text, audio, images or video, sensor data, or the like) using the input data processing module 630, identifies a key or attributes within the input data 605 using the pattern recognition module 635, and communicates the key or attributes to the query engine 640 for searching the lexicon of emojis 615 to identify one or more emojis associated with an electronic communication.
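A minimal sketch of this three-module flow is shown below, building on the HAPTIC_LEXICON sketch above; the function names and the text-only pattern matching are assumptions for illustration, not the converter's actual logic.

```python
def determine_characteristics(input_data: str) -> dict:
    # Input data processing module: characterize the incoming data. Only text is handled
    # here; a fuller converter would also inspect audio, image, and sensor streams.
    return {"modality": "text", "length": len(input_data)}

def recognize_patterns(input_data: str, characteristics: dict) -> str:
    # Pattern recognition module: map patterns in the input to a lexicon key.
    lowered = input_data.lower()
    if "hello" in lowered or "hi" in lowered:
        return "wave"
    if "haha" in lowered or "lol" in lowered:
        return "haha"
    return "unknown"

def convert_input_to_haptic(input_data: str):
    # Query engine: search the lexicon of emojis for the recognized key or attributes.
    characteristics = determine_characteristics(input_data)
    key = recognize_patterns(input_data, characteristics)
    return lookup_emoji(key)  # reuses the key-value lexicon sketch above
```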
FIG. 7 is a flowchart illustrating a process 700 for converting input data to haptic output using a lexicon of emojis according to various embodiments. The processing depicted in FIG. 7 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 7 and described below is intended to be illustrative and non-limiting. Although FIG. 7 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 6A-6C, the processing depicted in FIG. 7 may be performed by a social communication platform or system that facilitates touch communication with users.
At step 705, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.
At step 710, features are extracted from the input data that correspond to an electronic communication. The extracting comprises determining characteristics of the input data and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics. The key or attributes are the extracted features.
At step 715, an emoji (e.g., a haptic emoji) is identified from a lexicon of emojis based on the extracted features. The identifying the emoji comprises constructing a query using the extracted features as parameters of the query and executing the query on the lexicon of emojis.
At step 720, digital assets are obtained for the emoji. The digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output. In some instances, the digital assets further comprise an image or video asset, an audio asset, or both. The haptic signal for the emoji may be pre-generated. In some instances, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output that match the image or animation of the emoji and/or the sound effect of the emoji. In other instances, the haptic signal is configured with parameter information determined by a user (e.g., the first user or another user) to generate patterns for the haptic output that communicate an emotion via touch communication to the second user.
At step 725, the digital assets are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji) that is generated and rendered by the client system in the extended reality environment displayed to the user based on the digital assets (e.g., the image or video asset, the audio asset, or both).
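For illustration, the sketch below shows one way steps 720-725 could look on each side of the exchange, reusing the EmojiAssets and haptic_pattern sketches above; `send_fn` and `actuator.play` are hypothetical stand-ins for the transport channel and the haptic-device API, not calls defined by this disclosure.

```python
import json

def transmit_digital_assets(assets, send_fn) -> None:
    """Serialize the emoji's digital assets and hand them to a transport callback.
    `send_fn` stands in for whatever channel links the sending and receiving client systems."""
    payload = json.dumps({
        "image_asset": assets.image_asset,
        "audio_asset": assets.audio_asset,
        "haptic_signal": assets.haptic_signal,
    })
    send_fn(payload)

def receive_and_render(payload: str, actuator) -> None:
    """On the second user's device: decode the haptic signal and drive a haptic actuator.
    `actuator.play(samples)` is a hypothetical device API, not a real library call."""
    haptic = json.loads(payload)["haptic_signal"]
    samples = haptic_pattern(haptic["interval_ms"], haptic["pitch_hz"], haptic["amplitude"])
    actuator.play(samples)
```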
Touch Communication Using AI-Based System
A prediction model 825 can be a machine-learning model, such as a convolutional neural network (“CNN”), e.g., an inception neural network, a residual neural network (“Resnet”), or a recurrent neural network, e.g., long short-term memory (“LSTM”) models or gated recurrent units (“GRUs”) models, or other variants of Deep Neural Networks (“DNN”) (e.g., a multi-label n-binary DNN classifier or multi-class DNN classifier). A prediction model 825 can also be any other suitable ML model trained for providing a recommendation, such as a generative adversarial network (GAN), Naive Bayes classifier, linear classifier, support vector machine, bagging models such as a random forest model, boosting models, shallow neural networks, or combinations of one or more of such techniques, e.g., CNN-HMM or MCNN (multi-scale convolutional neural network). The machine-learning prediction system 800 may employ the same type of prediction model or different types of prediction models for predicting haptic emojis for conveying a touch message. Still other types of prediction models may be implemented in other examples according to this disclosure.
To train the various prediction models 825, the training stage 810 is comprised of two main components: a dataset preparation module 830 and a model training framework 840. The dataset preparation module 830 performs the processes of loading data assets 845, splitting the data assets 845 into training and validation sets 845a-n so that the system can train and test the prediction models 825, and pre-processing of data assets 845. The splitting of the data assets 845 into training and validation sets 845a-n may be performed randomly (e.g., a 90/10% or 70/30% split), or the splitting may be performed in accordance with a more complex validation technique such as K-Fold Cross-Validation, Leave-one-out Cross-Validation, Leave-one-group-out Cross-Validation, Nested Cross-Validation, or the like to minimize sampling bias and overfitting.
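A minimal sketch of these two splitting options (a random 90/10 split and plain K-fold index generation) is shown below; the function names and defaults are illustrative assumptions, not the dataset preparation module's actual interface.

```python
import random

def split_assets(assets: list, train_fraction: float = 0.9, seed: int = 0):
    """Randomly split data assets into training and validation sets (e.g., a 90/10 split)."""
    rng = random.Random(seed)
    shuffled = list(assets)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

def k_fold_indices(n_items: int, k: int = 5):
    """Yield (train_indices, validation_indices) pairs for K-fold cross-validation."""
    indices = list(range(n_items))
    fold_size = max(1, n_items // k)
    for i in range(k):
        validation = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, validation
```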
The model training stage 810 outputs trained models including one or more trained prediction models 860. The one or more trained prediction models 860 may be deployed and used in the implementation stage 820 to predict a haptic emoji or haptic signal 865 for conveying a touch message. For example, prediction models 860 may receive input data 870 (e.g., a gesture by a first user) or context data (e.g., a text message received by a second user), and predict a haptic emoji or haptic signal based on features and relationships between features extracted from within the input data 870.
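As a sketch of the inference step only, assuming a generic scikit-learn-style `predict` interface on whatever trained model is deployed (the function name and feature handling are illustrative):

```python
def predict_haptic_emoji(model, input_features: list, context_features: list) -> str:
    """Run a trained prediction model on concatenated input and context features and
    return the predicted haptic-emoji label, which maps back into the emoji lexicon.
    `model.predict` follows a generic estimator interface and is an assumption here."""
    features = input_features + context_features
    return model.predict([features])[0]
```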
FIG. 9 is a flowchart illustrating a process 900 to predict haptic emojis for conveying a touch message according to various embodiments. The processing depicted in FIG. 9 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 9 and described below is intended to be illustrative and non-limiting. Although FIG. 9 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 8, the processing depicted in FIG. 9 may be performed by a social communication platform or system that facilitates touch communication with users.
At step 905, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.
At step 910, a haptic emoji or a haptic signal is predicted based on the input data and model parameters learned from historical input data (e.g., a gesture by a first user) and context data (e.g., a text message received by a second user).
Atoptionalstep915(instancesofpredictingahapticemoji),digitalassetsareobtainedfortheemoji.Thedigitalassetscompriseahapticsignalconfiguredwithparameterinformationtogeneratepatternsforhapticoutput.Insomeinstances,thedigitalassetsfurthercompriseanimageorvideoasset,anaudioasset,orboth.Thehapticsignalfortheemojimaybepre-generated.Insomeinstances,thehapticsignalisconfiguredwiththeparameterinformationforinterval,pitch,andamplitudetogeneratethepatternsforthehapticoutput.Insomeinstances,thehapticsignalisconfiguredwithparameterinformationforinterval,pitch,andamplitudetogeneratepatternsforthehapticoutputthatmatchtheimageoranimationoftheemojiand/orthesoundeffectoftheemoji.Inotherinstances,thehapticsignalisconfiguredwithparameterinformationdeterminedbyauser(e.g.,thefirstuseroranotheruser)togeneratepatternsforthehapticoutputthatcommunicateanemotionviatouchcommunicationtotheseconduser.
Atstep920,thedigitalassetsorhapticsignalaretransmittedtoadeviceofaseconduser.Insomeinstances,thedeviceisanotherhead-mounteddevice.Thedeviceisconfiguredtoconvertthehapticsignaltothehapticoutputbasedontheparameterinformationinordertoconveyatouchmessageasatleastpartoftheelectroniccommunicationtotheseconduserviaahapticdevice.Insomeinstances,thehapticoutputisgeneratedwithvirtualcontent(e.g.,theimageoranimationoftheemojiand/orthesoundeffectoftheemoji)thatisgeneratedandrenderedbytheclientsystemintheextendedrealityenvironmentdisplayedtotheuserbasedonthedigitalassets(e.g.,theimageorvideoasset,theaudioasset,orboth).
Learning Program to Facilitate Learning of the Haptic Output
The input data 1005 may be text, audio, images or video, sensor data, or the like. The additional information 1030 may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.
In other instances, where the artificial intelligence based system 1020 predicts a haptic emoji or haptic signal, the learning module 1025 takes as input the haptic signal (or corresponding haptic emoji information) and determines, using one or more rules, logic, or machine-learning models, additional information 1030 (e.g., an audio component or an image component) that could be used to supplement the haptic signal. For example, the learning module 1025 may use one or more rules, logic, or machine-learning models to determine a text component, an audio component, and/or an image component that could be used to supplement the haptic signal (or corresponding haptic emoji information), then retrieve the text component, the audio component, and/or the image component from the data storage device 1035 or a secondary data storage device 1040 (e.g., a remote storage device or third-party storage device) and forward it along with the haptic component.
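A minimal sketch of how a learning module such as 1025 might select additional information 1030 is shown below. The lookup tables stand in for the data storage device 1035 and the secondary data storage device 1040, and a simple rule-based lookup is used purely for illustration in place of the rules, logic, or machine-learning models described above; the store contents are assumptions.

PRIMARY_STORE = {
    "wave":   {"text": '"sending user" waves hello to "receiving user"',
               "audio": "assets/whoosh.wav", "image": "assets/wave.gif"},
    "hahaha": {"text": '"sending user" is laughing',
               "audio": "assets/laugh.wav"},
}
SECONDARY_STORE = {
    "nope":   {"text": '"sending user" says nope',
               "image": "assets/thumbs_down.png"},
}

def supplement(haptic_emoji_id: str) -> dict:
    """Return additional information for the haptic signal, preferring primary storage
    and falling back to the secondary (e.g., remote or third-party) store."""
    info = PRIMARY_STORE.get(haptic_emoji_id) or SECONDARY_STORE.get(haptic_emoji_id) or {}
    return {"haptic_emoji": haptic_emoji_id, "additional_information": info}

message = supplement("nope")  # forwarded along with the haptic component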
The benefits and advantages of this approach are that the receiving user may more easily learn the haptic output patterns and associated meaning based on associated visual and/or audio context. For example, the learning module 1025 may be configured to transmit the haptic signal along with a visual and/or audio signal to the receiving user such that, when the user feels the haptic output 1010 based on the haptic signal and concurrently visualizes on a display the visual signal (e.g., a visual emoji) and/or hears the audio signal, the user learns to associate the haptic output pattern with an associated visual and/or audio context. The visual and/or audio signal may be obtained as part of the additional information 1030 and associated and transmitted with the haptic signal by the learning module 1025. Additionally or alternatively, the visual and/or audio signal may be generated based on the additional information 1030 by the learning module 1025, and associated and transmitted with the haptic signal by the learning module 1025.
FIG. 11 is a flowchart illustrating a process 1100 for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments. The processing depicted in FIG. 11 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 11 and described below is intended to be illustrative and non-limiting. Although FIG. 11 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, or 10, the processing depicted in FIG. 11 may be performed by a social communication platform or system that facilitates touch communication with users.
At step 1105, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.
At step 1110, an emoji (e.g., a haptic emoji) or haptic signal is identified from a lexicon of emojis or an artificial intelligence based system, as described with respect to FIGS. 6A-6C, 7, 8, and 9.
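For the lexicon-based path of step 1110, a hedged sketch is given below of identifying an emoji by querying a lexicon with extracted features used as the query parameters. The lexicon entries and feature keys are illustrative assumptions and the overlap-scoring query is one simple possibility among many.

from typing import Optional, Set

LEXICON = [
    {"id": "wave",   "keys": {"greeting", "hand_raised"}},
    {"id": "hahaha", "keys": {"laughter"}},
    {"id": "nope",   "keys": {"negation", "head_shake"}},
]

def identify_emoji(extracted_features: Set[str]) -> Optional[str]:
    """Return the lexicon entry whose keys best overlap the extracted features,
    or None when no entry matches at all."""
    best = max(LEXICON, key=lambda entry: len(entry["keys"] & extracted_features))
    return best["id"] if best["keys"] & extracted_features else None

print(identify_emoji({"greeting", "hand_raised"}))  # -> "wave"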
At step 1115, additional information is obtained based on the emoji or haptic signal. The additional information may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.
At step 1120, the haptic signal and additional information are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. The haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).
Receiving the Haptic Signal and Generating the Haptic Output
The processor 1215 reads instructions from the memory 1230 and executes them to perform various operations. The processor 1215 may be embodied using any suitable instruction set architecture and may be configured to execute instructions defined in that instruction set architecture. The processor 1215 may be a general-purpose or embedded processor using any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, ARM, or MIPS ISAs, or any other suitable ISA. Although a single processor is illustrated in FIG. 12, the signal generator 1200 may include multiple processors.
The haptic interface circuit 1220 is a circuit that interfaces with the cutaneous actuators 1205. The haptic interface circuit 1220 generates actuator signals 1210 based on commands from the processor 1215. For this purpose, the haptic interface circuit 1220 may include, for example, a digital-to-analog converter (DAC) for converting digital signals into analog signals. The haptic interface circuit 1220 may also include an amplifier to amplify the analog signals for transmitting the actuator signals 1210 over cables between the signal generator 1200 and the cutaneous actuators 1205. In some embodiments, the haptic interface circuit 1220 communicates with the actuators 1205 wirelessly. In such embodiments, the haptic interface circuit 1220 includes components for modulating wireless signals for transmission to the actuators 1205 over wireless channels.
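The following is an illustrative sketch of the kind of conversion a haptic interface circuit such as 1220 performs: expanding a parameterized haptic pattern of (interval, pitch, amplitude) pulses into a sample stream and quantizing it into codes a DAC could output to a cutaneous actuator. The sample rate, waveform shape, and bit depth are assumptions chosen only for this example.

import math

SAMPLE_RATE_HZ = 8000  # assumed output rate for the sketch

def pattern_to_samples(pattern):
    """pattern: list of (interval_ms, pitch_hz, amplitude) pulses -> floats in [-1, 1]."""
    samples = []
    for interval_ms, pitch_hz, amplitude in pattern:
        n = int(SAMPLE_RATE_HZ * interval_ms / 1000.0)
        for i in range(n):
            t = i / SAMPLE_RATE_HZ
            samples.append(amplitude * math.sin(2.0 * math.pi * pitch_hz * t))
    return samples

def to_dac_codes(samples, bits=12):
    """Quantize [-1, 1] samples to unsigned DAC codes (e.g., 12-bit)."""
    full_scale = (1 << bits) - 1
    return [int((s + 1.0) / 2.0 * full_scale) for s in samples]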
The communication module 1225 (e.g., the receiving device 570 described with respect to FIG. 5) is hardware or a combination of hardware, firmware, and software for communicating with other computing devices. The communication module 1225 may, for example, enable the signal generator 1200 to communicate with a social networking system, a transmitting or sending client system, or an electronic communication source over the network. The communication module 1225 may be embodied as a network card. The memory 1230 is a non-transitory computer readable storage medium for storing software modules. Software modules stored in the memory 1230 may include, among others, applications 1240 and a haptic signal processor 1245 (e.g., the signal processor 547 described with respect to FIG. 5). The memory 1230 may include other software modules not illustrated in FIG. 12, such as an operating system. The applications 1240 may use haptic output via the cutaneous actuators 1205 to perform various functions, such as electronic communication, gaming, and entertainment.
The signal generator 1200 as illustrated in FIG. 12 is merely illustrative, and various modifications may be made to the signal generator 1200. For example, instead of embodying the signal generator 1200 as a software module, the signal generator 1200 may be embodied as a hardware circuit, or a combination of hardware circuits and software modules.
FIG. 13 is a flowchart illustrating a process 1300 for generating a haptic output in accordance with various embodiments. The processing depicted in FIG. 13 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 13 and described below is intended to be illustrative and non-limiting. Although FIG. 13 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, 10, or 12, the processing depicted in FIG. 13 may be performed by a social communication platform or system that facilitates touch communication with users.
At step 1315, the one or more actuator signals are generated based on the parameters determined for the one or more actuator signals. The generating of the one or more actuator signals may include performing digital-to-analog conversion of the haptic signal and/or the one or more actuator signals.
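A hedged sketch of generating one actuator signal per cutaneous actuator from the determined parameters is shown below, reusing the pattern_to_samples() and to_dac_codes() helpers from the sketch above. The per-actuator parameter layout and the actuator identifiers are assumptions introduced only for illustration.

def generate_actuator_signals(actuator_parameters):
    """actuator_parameters: {actuator_id: pattern of (interval_ms, pitch_hz, amplitude)}.
    Returns {actuator_id: DAC codes} ready to be transmitted to each actuator."""
    return {actuator_id: to_dac_codes(pattern_to_samples(pattern))
            for actuator_id, pattern in actuator_parameters.items()}

signals = generate_actuator_signals({
    "wrist_left":  [(120, 180.0, 0.8), (120, 180.0, 0.6)],
    "wrist_right": [(240, 150.0, 0.4)],
})
# Each entry in `signals` would then be transmitted to its corresponding cutaneous actuator.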
At step 1320, the one or more actuator signals are transmitted to one or more corresponding cutaneous actuators.
At step 1325, one or more cutaneous actuators generate haptic output in accordance with the corresponding one or more actuator signals, which cause one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature, on the second user's body. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).
ADDITIONAL CONSIDERATIONS
Although specific examples have been described, various modifications, alterations, alternative constructions, and equivalents are possible. Examples are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although certain examples have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that this is not intended to be limiting. Although some flowcharts describe operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Various features and aspects of the above-described examples may be used individually or jointly.
Further, while certain examples have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also possible. Certain examples may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein may be implemented on the same processor or different processors in any combination.
Where devices, systems, components, or modules are described as being configured to perform certain operations or functions, such configuration may be accomplished, for example, by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, such as by executing computer instructions or code, or by processors or cores programmed to execute code or instructions stored on a non-transitory memory medium, or any combination thereof. Processes may communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communications, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
Specific details are given in this disclosure to provide a thorough understanding of the examples. However, examples may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the examples. This description provides examples only, and is not intended to limit the scope, applicability, or configuration of other examples. Rather, the preceding description of the examples will provide those skilled in the art with an enabling description for implementing various examples. Various changes may be made in the function and arrangement of elements.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific examples have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
In the foregoing description, for the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate examples, the methods may be performed in a different order than that described. It should also be appreciated that the methods described above may be performed by hardware components or may be embodied in sequences of machine-executable instructions, which may be used to cause a machine, such as a general-purpose or special-purpose processor or logic circuits programmed with the instructions, to perform the methods. These machine-executable instructions may be stored on one or more machine-readable mediums, such as CD-ROMs or other types of optical disks, floppy diskettes, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of machine-readable mediums suitable for storing electronic instructions. Alternatively, the methods may be performed by a combination of hardware and software.
Where components are described as being configured to perform certain operations, such configuration may be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.
While illustrative examples of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.